Increasing throughput rates and technical developments in video streaming over the Internet offer an attractive solution for the distribution of immersive 3D multi-view video. Nevertheless, the robustness of video streaming depends on the use of efficient error-resilience and content-aware adaptation techniques. Dynamic network characteristics, resulting in frequent congestion, may prevent video packets from being delivered in a timely manner. Packet delivery failures may become prominent, significantly degrading the immersive 3D video experience. To overcome this problem, a novel view recovery technique for 3D free-viewpoint video is introduced to maintain 3D video quality in a cost-effective manner. In this concept, views that go undelivered (discarded) as a result of in-network adaptation are recovered with high quality at the receiver side, using Side Information (SI) and the delivered frames of neighbouring views. The proposed adaptive 3D multi-view video streaming scheme is tested using the Dynamic Adaptive Streaming over HTTP (MPEG-DASH) standard. Tests using the proposed adaptive technique reveal that perceptual 3D video quality under adverse network conditions is significantly improved thanks to the use of the extra side information in view recovery.
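The sender-side adaptation decision described above can be sketched as a simple greedy budget allocation: keep as many full views as the throughput allows, and mark the rest as discarded, sending only their SI so the receiver can recover them from neighbouring views. This is a minimal illustrative sketch, not the scheme's actual algorithm; all function names and bitrates are assumptions.

```python
def plan_views(view_bitrates, si_bitrate, budget):
    """Return (streamed, recovered) view lists for one adaptation interval.

    view_bitrates: dict mapping view id -> full-quality bitrate (kbps)
    si_bitrate:    cost of sending SI for one discarded view (kbps)
    budget:        available throughput for this interval (kbps)
    """
    streamed, recovered = [], []
    # Greedy pass over views, most expensive first: stream a view in full
    # if it fits the remaining budget, otherwise fall back to SI-only so
    # the receiver can synthesise it from the delivered neighbouring views.
    for view in sorted(view_bitrates, key=view_bitrates.get, reverse=True):
        if view_bitrates[view] <= budget:
            streamed.append(view)
            budget -= view_bitrates[view]
        elif si_bitrate <= budget:
            recovered.append(view)  # discarded in the network; recovered at receiver
            budget -= si_bitrate
    return streamed, recovered
```

For example, with three 800 kbps views, 100 kbps of SI per view and a 1700 kbps budget, two views are streamed in full and the third is delivered as SI only.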
Long Term Evolution (LTE) networks provide mobile users with ever-increasing, ubiquitous access to a rich selection of high-quality multimedia. This work proposes a Hybrid Unicast Broadcast Synchronisation (HUBS) framework that works within the LTE standard to deliver multi-stream video content synchronously, monitoring the radio bearer queues to establish the time lead or lag between the broadcast and unicast streams. Since unicast and eMBMS share the same radio resources, the number of sub-frames allocated to the eMBMS transmission is then dynamically increased or decreased to minimise the average lead/lag time offset between the streams. Dynamic allocation showed improvements for all services across the cell, while keeping the streams synchronised despite increased user loading.
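The allocation loop described above can be illustrated as a small feedback controller: when broadcast lags unicast, give eMBMS more sub-frames; when it leads, give capacity back to unicast. This is only a sketch of the idea; the dead-band threshold and the hypothetical function name are assumptions, while the 6-of-10 cap reflects the LTE rule that at most six sub-frames per radio frame may carry MBSFN/eMBMS.

```python
MAX_EMBMS_SUBFRAMES = 6  # LTE permits at most 6 of 10 sub-frames per frame for MBSFN
MIN_EMBMS_SUBFRAMES = 1

def adjust_embms_subframes(current, offset_ms, deadband_ms=20):
    """Return the eMBMS sub-frame allocation for the next radio frame.

    offset_ms > 0: broadcast lags unicast, so grant eMBMS more capacity.
    offset_ms < 0: broadcast leads, so return capacity to unicast.
    A small dead band avoids oscillating around perfect synchronisation.
    """
    if offset_ms > deadband_ms:
        current += 1
    elif offset_ms < -deadband_ms:
        current -= 1
    return max(MIN_EMBMS_SUBFRAMES, min(MAX_EMBMS_SUBFRAMES, current))
```

A one-sub-frame step per frame keeps the adjustment gradual, so the unicast bearers are never starved by a sudden reallocation.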
Application offloading is an emerging research area focused on leveraging the vast computational resources available in the cloud on behalf of mobile devices. It is challenging because of the heterogeneity of applications, mobile devices and cloud resources, and offloading becomes even more complex once the unreliable nature of wireless communication is taken into account. In our research, we formulate the offloading problem through contextual modelling of the cloud, the mobile device, the application and the wireless network in terms of their parameters, and then discuss the feasibility of application partitioning and offloading by representing an application as a graph. We use two application scenarios: ultra-high-definition video coding and large-scale image retrieval.
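The graph formulation above can be made concrete with a toy model: nodes are application components, each with a cost to execute locally on the mobile or remotely in the cloud, and edges carry data-transfer costs paid only when their endpoints are split across the two sides. For small graphs the optimal partition can simply be enumerated; practical schemes use graph cuts or heuristics instead. All names and costs here are illustrative assumptions.

```python
from itertools import product

def best_partition(nodes, edges, local_cost, cloud_cost, transfer_cost):
    """Exhaustively find the cheapest mobile/cloud placement.

    nodes:         list of component ids
    edges:         list of (u, v) data dependencies
    local_cost:    dict node -> cost of running on the mobile
    cloud_cost:    dict node -> cost of running in the cloud
    transfer_cost: dict (u, v) -> cost incurred only if u and v are split
    """
    best = (float('inf'), None)
    for placement in product(('mobile', 'cloud'), repeat=len(nodes)):
        where = dict(zip(nodes, placement))
        # Execution cost depends on where each component is placed...
        cost = sum(local_cost[n] if where[n] == 'mobile' else cloud_cost[n]
                   for n in nodes)
        # ...plus communication cost for every edge cut by the partition.
        cost += sum(transfer_cost[(u, v)]
                    for u, v in edges if where[u] != where[v])
        if cost < best[0]:
            best = (cost, where)
    return best
```

For instance, a lightweight UI component paired with a compute-heavy encoder typically ends up split: the UI stays on the mobile while the encoder is offloaded, provided the transfer cost of the connecting edge is small enough.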
The starting point for the project is a challenge and opportunity related to participatory design of the built environment. In a recent report on architecture and planning in the UK, current design processes were criticised for not being participatory enough in representing the needs and aspirations of local residents, or respecting the history and cultural heritage of areas subject to re-development (Farrell Review 2013). Amongst the recommendations of this report were suggestions to take a more holistic view of places and their identities, to achieve a new level of proactive public engagement in planning, and to draw on knowledge of the past in planning for the future (op cit). A specific opportunity to implement some of these suggestions already exists in the form of the Localism Act of 2011, which empowers communities to create neighbourhood plans for the development of their areas (DCLG 2010). However, findings on the uptake of these powers indicate some reluctance of communities to engage with the initiative and a conservative approach to planning which fails to meet government targets for housing and economic growth (Gallent 2013). Essentially, new methods of pro-active community engagement are needed.
Community radio and TV have been used for many years to empower communities around the world to take more initiative in their own development. In our own prior work we used mobile digital storytelling to provide a narrative film library to a rural community in India (Frohlich et al 2007). In two further Digital Economy projects we scaled up this approach in South Africa and Preston, UK, to support audio-visual community journalism for development. In South Africa we developed the Com-Me open source toolkit for community media sharing, and in Preston we developed an ‘insight journalism’ methodology for applying this to local innovation (Frohlich et al 2009 – EP/E006698/1, Blum-Ross et al 2012 – EP/H007296/1). Following other initiatives in Australia and Italy (Foth et al 2007, Galbiati et al 2010), we would now like to apply our storytelling technology and approach to local urban design. The approach also extends several other projects within the Communities and Culture Network+ and would benefit from their findings and input. These include Plugin narratives, New knowledge networks in communities, Cultural heritage and built environment, Hyperlocal government engagement online, Screen cultures, Trajectories to community engagement, and Public engagement and cultures of expertise.
ACTION-TV proposes an innovative mode of user interaction for broadcasting to relax the rigid and passive nature of present broadcasting ecosystems. It has two key aims:
– A group of users can take part in TV shows providing a sense of immersion into the show and seamless engagement with the content;
– Users are encouraged to use TV shows as a means of social engagement, as well as keeping them and their talents more visible across their social circles.
These aims will be achieved by developing an advanced digital media access and delivery platform that augments traditional audio-visual broadcasts with novel interactivity elements to encourage natural engagement with the content. Mixed-reality technologies will be developed to insert users into pre-recorded content, which will be made ‘responsive’ to users’ actions through the ingenious use of a set of auxiliary streams. The potential of media cloud technologies will be harnessed to personalise ACTION-TV-enabled broadcast content for a group of collaborating users based on their actions. As a result, content producers will, for the first time, be able to create media applications with richer content-level user interactivity. Cloud-service providers will be able to monetise their infrastructure by leveraging the increased demand for strategically located in-network media processing. Participating users will be able to share personalised content with their social peers. In this way, end users will have access to more engaging personalised content and can socialise with community members who share common interests. ACTION-TV supports a range of applications, from an individual trying out a garment in a TV advert to a group of users interactively attending a TV talent show from the convenience of their own homes. The ways of utilising the proposed interactivity concept are, however, endless, limited only by the imagination of content producers.
Integration of the FreeEye browsing interface and design of the Sally Potter Archive at SP-ARK.org.
SP-ARK is an interactive online project based on the multi-media archive of film-maker Sally Potter.
A unique educational resource, SP-ARK is designed to enhance your access to, and knowledge of, film and filmmaking, whatever your interests.
By telling the story of a film by showing every aspect of the production – from the initial idea through screenwriting, budgeting and casting to the composition of an individual shot – SP-ARK invites you to become part of a new generation of film scholars and viewers who understand film inside out.
Here is a timeline of the knowledge transfer projects related to the development of SP-ARK.org.
It is well known that photo, video and web content on a mobile phone is difficult to share with a group in settings where there is no other digital technology to upload it to. Yet these settings may have existing analogue televisions that could be used as public displays. Com-Cam is a low-cost device for relaying the screen and sound of a mobile phone to an analogue television: it films the phone’s screen with an overhead camera and takes a sound feed from the phone’s headphone socket. Cables from Com-Cam plug into the SCART or audio-visual sockets of a TV switched to its AV input channel setting. A number of different versions of the device have been tried. The best one includes an adjustable “lamp-like” structure with a whiteboard or chalkboard base. The overhead camera sits in the head of the lamp-like device, which can be manually focussed and moved up or down over the screen of any mobile phone placed underneath. The mobile phone controls are left accessible while the screen image is relayed at an appropriate scale to fill the TV screen.
The overhead camera also supports the use of Com-Cam as a whiteboard and overhead projector for making group presentations on a TV. Writing or sketching on the whiteboard or chalkboard base appears on the TV screen. Usefully, this can also be done on paper for quick removal and replacement. Additionally, printed documents and other objects and materials can be placed on the base for presentation on the TV screen, as with an overhead projector. Com-Cam has a built-in microphone which can be switched on in these situations to amplify the speaker’s comments through the TV speakers. The electronic components of the device are readily available off the shelf in most countries and cost about £10. Our design shows how these components can be mounted on a simple lampstand made out of local materials such as boxes and rulers.