Increasing throughput rates and technical developments in video streaming over the Internet offer an attractive solution for the distribution of immersive 3D multi-view video. Nevertheless, the robustness of video streaming depends on its use of efficient error-resilience and content-aware adaptation techniques. Dynamic network characteristics resulting in frequent congestion may prevent video packets from being delivered in a timely manner. Packet delivery failures may become prominent, degrading the 3D immersive video experience significantly. To overcome this problem, a novel view recovery technique for 3D free-viewpoint video is introduced to maintain 3D video quality in a cost-effective manner. In this concept, views that are undelivered (discarded) as a result of adaptation in the network are recovered with high quality at the receiver side, using Side Information (SI) and the delivered frames of neighbouring views. The proposed adaptive 3D multi-view video streaming scheme is tested using the Dynamic Adaptive Streaming over HTTP (MPEG-DASH) standard. Tests with the proposed adaptive technique have revealed that perceptual 3D video quality under adverse network conditions is significantly improved thanks to the utilisation of the extra side information in view recovery.
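The discard-then-recover idea can be illustrated with a minimal Python sketch. All names here are hypothetical, and a simple average of the nearest delivered neighbouring views stands in for the SI-assisted synthesis described above; the actual scheme operates on coded views within an MPEG-DASH session.

```python
def choose_discards(view_ids, bitrates, budget):
    """Network-side adaptation sketch: drop alternating views (odd
    indices first) until the stream fits the bitrate budget, so every
    discarded view keeps delivered neighbours on both sides."""
    kept = list(view_ids)
    for v in view_ids[1::2]:
        if sum(bitrates[k] for k in kept) <= budget:
            break
        kept.remove(v)
    return kept

def recover_view(v, delivered_frames):
    """Receiver-side recovery sketch: reconstruct discarded view v from
    its nearest delivered neighbours (sample-wise average here is a
    stand-in for the SI-assisted view synthesis)."""
    left = max(k for k in delivered_frames if k < v)
    right = min(k for k in delivered_frames if k > v)
    return [(a + b) / 2
            for a, b in zip(delivered_frames[left], delivered_frames[right])]
```

For example, with five 10-unit views and a 30-unit budget, views 1 and 3 are discarded and each is later rebuilt from the delivered views either side of it.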
Long Term Evolution (LTE) networks provide mobile users with perpetually increasing, ubiquitous access to a rich selection of high-quality multimedia. This work proposes a Hybrid Unicast Broadcast Synchronisation (HUBS) framework which works within the LTE standard to synchronously deliver multi-stream video content, monitoring the radio bearer queues to establish a time lead or lag between the broadcast and unicast streams. Since unicast and evolved Multimedia Broadcast Multicast Service (eMBMS) traffic share the same radio resources, the number of sub-frames allocated to the eMBMS transmission is then dynamically increased or decreased to minimise the average lead/lag time offset between the streams. Dynamic allocation showed improvements for all services across the cell, whilst keeping the streams synchronised despite increased user loading.
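The allocation loop can be sketched as a simple feedback controller. The names, the 20 ms threshold and the sign convention (positive offset meaning the broadcast stream lags unicast) are illustrative assumptions rather than the HUBS parameters; the cap of six sub-frames per radio frame reflects the LTE limit on MBSFN sub-frames.

```python
def adjust_embms_subframes(lead_lag_ms, current_sf,
                           min_sf=1, max_sf=6, threshold_ms=20):
    """Hypothetical per-frame controller: grant eMBMS an extra
    sub-frame when broadcast lags unicast beyond the threshold, and
    return one to unicast when broadcast runs ahead."""
    if lead_lag_ms > threshold_ms:        # broadcast behind: speed it up
        return min(current_sf + 1, max_sf)
    if lead_lag_ms < -threshold_ms:       # broadcast ahead: free resources
        return max(current_sf - 1, min_sf)
    return current_sf                     # within tolerance: hold steady
```

Clamping to `min_sf`/`max_sf` keeps at least some capacity for each traffic type, mirroring the shared-resource constraint described above.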
Application offloading is an emerging research area focused on leveraging the vast computation resources available in the cloud on behalf of mobile devices. This research area is quite challenging due to the heterogeneity of applications, mobile devices and cloud resources. Offloading becomes even more complex when the vulnerable nature of wireless communication is taken into account. In our research, we formulate the offloading problem through contextual modelling of the cloud, the mobile device, the application and the wireless network in terms of their parameters, and then discuss the feasibility of application partitioning and offloading by representing an application in the form of a graph. We use two application scenarios: ultra-high-definition video coding and large-scale image retrieval.
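Representing the application as a graph turns the offloading decision into a partitioning problem: vertices carry local and cloud execution costs, and edges carry data-transfer costs incurred whenever an edge crosses the mobile–cloud cut. The brute-force Python sketch below uses hypothetical cost units and exhaustive search, so it is only practical for small graphs; it is not our actual formulation.

```python
from itertools import product

def best_partition(local_cost, cloud_cost, transfer):
    """Assign each component to the mobile (0) or the cloud (1),
    minimising execution cost plus transfer cost across the cut.
    `transfer` maps component-index pairs (i, j) to edge weights."""
    n = len(local_cost)
    best, best_assign = float("inf"), None
    for assign in product((0, 1), repeat=n):
        # Execution cost depends on where each component runs.
        cost = sum(cloud_cost[i] if a else local_cost[i]
                   for i, a in enumerate(assign))
        # Transfer cost is paid only on edges crossing the partition.
        cost += sum(w for (i, j), w in transfer.items()
                    if assign[i] != assign[j])
        if cost < best:
            best, best_assign = cost, assign
    return best_assign, best
```

With compute-heavy components 0 and 1 and a data-bound component 2, the sketch offloads the first two and keeps the last one on the device, paying only the single cut edge.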
Audio editing is performed at scale in the production of radio, but the tools used are often poorly targeted towards the task at hand. There are a number of audio analysis techniques that have the potential to aid radio producers, but without a detailed understanding of their process and requirements, it can be difficult to apply these methods. To aid this understanding, a study of radio production practice was conducted on three varied case studies – a news bulletin, a drama and a documentary. It examined the audio/metadata workflow, the roles and motivations of the producers, and environmental factors.
The starting point for the project is a challenge and opportunity related to participatory design of the built environment. In a recent report on architecture and planning in the UK, current design processes were criticised for not being participatory enough in representing the needs and aspirations of local residents, or respecting the history and cultural heritage of areas subject to re-development (Farrell Review 2013). Amongst the recommendations of this report were suggestions to take a more holistic view of places and their identities, to achieve a new level of proactive public engagement in planning, and to draw on knowledge of the past in planning for the future (op cit). A specific opportunity to implement some of these suggestions already exists in the form of the Localism Act of 2011, which empowers communities to create neighbourhood plans for the development of their areas (DCLG 2010). However, findings on the uptake of these powers indicate some reluctance of communities to engage with the initiative and a conservative approach to planning which fails to meet government targets for housing and economic growth (Gallent 2013). Essentially, new methods of pro-active community engagement are needed.
Community radio and TV have been used for many years to empower communities around the world to take more initiative in their own development. In our own prior work we have used mobile digital storytelling to provide a narrative film library to a rural community in India (Frohlich et al 2007). In two further Digital Economy projects we scaled up this approach within South Africa and Preston, UK, to support audio-visual community journalism for development. In South Africa we developed the Com-Me open source toolkit for community media sharing, and in Preston we developed an ‘insight journalism’ methodology for applying this to local innovation (Frohlich et al 2009 – EP/E006698/1, Blum-Ross et al 2012 – EP/H007296/1). Following other initiatives in Australia and Italy (Foth et al 2007, Galbiati et al 2010), we would now like to apply our storytelling technology and approach to local urban design. The approach also extends several other projects within the Communities and Culture Network+ and would benefit from their findings and input. These include Plugin narratives, New knowledge networks in communities, Cultural heritage and built environment, Hyperlocal government engagement online, Screen cultures, Trajectories to community engagement, Public engagement and cultures of expertise.
Paper has been with us for many thousands of years and still has properties that we continue to enjoy in the digital age. Rather than replacing paper with e-readers and screen technology, we aim in this project to connect paper to digital information, especially sound.
In a previous research project called Interactive Newsprint we explored the properties of connecting paper to the web through interactive regions which registered human touch and played back associated sound. One of the challenges of the project was in printing these regions and associated electronic components on the paper itself. Light tags is a new printed electronics technology from Surrey University which makes this easier. It has the potential to unlock a number of commercial applications of interactive paper in the print and packaging industry. In this project we aim to create proof of concept demonstrators of the technology, and collect feedback from both end users and industry representatives.
The project runs for 9 months from 1st July 2014 and involves a collaboration between Digital World Research Centre and the Advanced Technology Institute at the University of Surrey. It will be done in partnership with the Welsh Centre for Printing and Coating at Swansea University. The project is co-funded by the EPSRC Impact Acceleration Account (IAA) at Surrey University, and an Academic Expertise for Business (A4B) grant to Swansea University. We would also like to acknowledge the role of the EU COST FP1104 network on New opportunities for print media and packaging in facilitating this collaboration.
ACTION-TV proposes an innovative mode of user interaction for broadcasting to relax the rigid and passive nature of present broadcasting ecosystems. It has two key aims:
– A group of users can take part in TV shows providing a sense of immersion into the show and seamless engagement with the content;
– Users are encouraged to use TV shows as a means of social engagement, as well as keeping them and their talents more visible across social circles.
These aims will be achieved by developing an advanced digital media access and delivery platform that augments traditional audio-visual broadcasts with novel interactivity elements to encourage natural engagement with the content. Mixed-reality technologies will be developed to insert users into pre-recorded content, which will be made ‘responsive’ to users’ actions by ingeniously using a set of auxiliary streams. The potential of media cloud technologies will be harnessed to personalise ACTION-TV-enabled broadcast content for a group of collaborating users based on their actions. As a result, content producers will, for the first time, be able to generate creative media applications with richer content-level user interactivity. Cloud-service providers will be able to monetise their infrastructure by leveraging the increased demand for strategically located in-network media processing. Participating users will be able to share personalised content with their social peers. In this way, end users will have access to more engaging personalised content and will be able to socialise with community members who share common interests. ACTION-TV supports a range of applications, from an individual trying out a garment in a TV advert to a group of users interactively attending a TV talent show with the convenience of staying at home. However, the ways of utilising the proposed interactivity concept are endless, limited only by the imagination of inspired content producers.