Long Term Evolution (LTE) networks provide mobile users with increasingly ubiquitous access to a rich selection of high-quality multimedia. This work proposes a Hybrid Unicast Broadcast Synchronisation (HUBS) framework that works within the LTE standard to synchronously deliver multi-stream video content, monitoring the radio bearer queues to establish the time lead or lag between broadcast and unicast streams. Since unicast and eMBMS share the same radio resources, the number of sub-frames allocated to the eMBMS transmission is then dynamically increased or decreased to minimise the average lead/lag offset between the streams. Dynamic allocation improved all services across the cell, while keeping the streams synchronised despite increased user loading.
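The control loop described above can be sketched as follows. This is a minimal illustration, not the HUBS implementation: the function names, the deadband threshold, and the step size are assumptions; the only facts taken from the abstract are that the lead/lag offset is estimated from the radio bearer queues and that the eMBMS sub-frame allocation is nudged up or down to minimise it.

```python
# Illustrative sketch of HUBS-style dynamic allocation (names and
# thresholds are assumptions, not taken from the LTE standard).

def lead_lag_offset(broadcast_queue, unicast_queue):
    """Estimate the broadcast-vs-unicast time offset (seconds) from the
    presentation timestamps at the head of each radio bearer queue.
    A positive value means the broadcast stream is running ahead."""
    if not broadcast_queue or not unicast_queue:
        return 0.0
    return broadcast_queue[0] - unicast_queue[0]

def adjust_embms_subframes(current, offset, step=1, lo=1, hi=6, deadband=0.02):
    """Nudge the number of eMBMS sub-frames per radio frame up or down to
    shrink the average lead/lag offset, within an allowed range (LTE
    permits at most 6 of the 10 sub-frames per frame for eMBMS)."""
    if offset > deadband:        # broadcast ahead: give it fewer sub-frames
        current = max(lo, current - step)
    elif offset < -deadband:     # broadcast lagging: give it more sub-frames
        current = min(hi, current + step)
    return current               # within the deadband: leave allocation alone
```

In this sketch the deadband keeps the allocation stable when the streams are already close to synchronised, so the controller only spends radio resources reacting to offsets large enough to be perceptible.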
Audio editing is performed at scale in the production of radio, but often the tools used are poorly targeted towards the task at hand. A number of audio analysis techniques have the potential to aid radio producers, but without a detailed understanding of their process and requirements, it can be difficult to apply these methods. To build this understanding, a study of radio production practice was conducted on three varied case studies – a news bulletin, a drama and a documentary. It examined the audio/metadata workflow, the roles and motivations of the producers, and environmental factors.
ACTION-TV proposes an innovative mode of user interaction for broadcasting to relax the rigid and passive nature of present broadcasting ecosystems. It has two key aims:
– A group of users can take part in TV shows providing a sense of immersion into the show and seamless engagement with the content;
– Users are encouraged to use TV shows as a means of social engagement, keeping them and their talents visible across their social circles.
These aims will be achieved by developing an advanced digital media access and delivery platform that augments traditional audio-visual broadcasts with novel interactivity elements to encourage natural engagement with the content. Mixed-reality technologies will be developed to insert users into pre-recorded content, which is made ‘responsive’ to users’ actions through a set of auxiliary streams. The potential of media cloud technologies will be harnessed to personalise ACTION-TV-enabled broadcast content for a group of collaborating users based on their actions. As a result, content producers will, for the first time, be able to create media applications with richer content-level user interactivity. Cloud-service providers will be able to monetise their infrastructure by leveraging the increased demand for strategically located in-network media processing. Participating users will be able to share personalised content with their social peers. In this way, end users gain access to more engaging personalised content and can socialise with community members who share their interests. ACTION-TV supports a range of applications, from an individual trying out a garment in a TV advert to a group of users interactively attending a TV talent show from the comfort of home. The ways of using the proposed interactivity concept are, however, endless, limited only by the imagination of inspired content producers.
Integration of the FreeEye browsing interface and design of the Sally Potter Archive at SP-ARK.org.
SP-ARK is an interactive online project based on the multi-media archive of film-maker Sally Potter.
A unique educational resource, SP-ARK is designed to enhance your access to, and knowledge of, film and filmmaking, whatever your interests.
By telling the story of a film by showing every aspect of the production – from the initial idea through screenwriting, budgeting and casting to the composition of an individual shot – SP-ARK invites you to become part of a new generation of film scholars and viewers who understand film inside out.
Below is a timeline of the knowledge transfer projects related to the development of SP-ARK.org.
This research comprises a series of user experience studies in Human Computer Interaction (HCI) that: i) analyse user aspects of stereoscopic 3D video interaction, ii) propose technical solutions and iii) give design guidelines for intuitive interaction with stereoscopic 3D video content.
One of the main emerging challenges for future multimedia platforms is the development of three-dimensional (3D) display technology, which has prompted a plethora of research activity on this topic in the video research community. This emerging technology can bring a whole new experience to the end user by offering a truly immersive 3D experience. However, research into meaningful user interaction with 3D content is still at an early stage.
With this in mind, the main aim of this research is to provide a comprehensive understanding of how to develop an interactive 3D video platform that delivers intuitive interaction with 3D video content. The key elements of the proposed platform are effective interaction with the content and the design of appropriate UI modalities. Moreover, to specify the requirements for these designs, a number of studies into the implications of the 3D content delivery mechanism, as well as best user practices, are being conducted.