Hybrid LTE Unicast Broadcast

Long Term Evolution (LTE) networks provide mobile users with increasingly ubiquitous access to a rich selection of high-quality multimedia. This work proposes a Hybrid Unicast Broadcast Synchronisation (HUBS) framework that works within the LTE standard to deliver multi-stream video content synchronously, monitoring the radio bearer queues to establish the time lead or lag between the broadcast and unicast streams. Since unicast and eMBMS share the same radio resources, the number of sub-frames allocated to the eMBMS transmission is then dynamically increased or decreased to minimise the average lead/lag offset between the streams. Dynamic allocation showed improvements for all services across the cell, whilst keeping the streams synchronised despite increased user loading.
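As a rough illustration of the control loop this implies, the sketch below (with a hypothetical head_timestamp_ms queue hook and invented thresholds) estimates the mean broadcast/unicast offset from the radio bearer queues and nudges the eMBMS subframe allocation one step at a time; it is a sketch under these assumptions, not the HUBS implementation itself.

```python
# Illustrative sketch only: the queue interface (head_timestamp_ms) and the
# step-wise reallocation are assumptions, not part of the LTE specification
# or of the HUBS implementation itself.

EMBMS_MIN_SUBFRAMES = 1    # assumed lower bound per 10 ms radio frame
EMBMS_MAX_SUBFRAMES = 6    # LTE permits at most 6 of the 10 subframes for MBSFN
OFFSET_DEADBAND_MS = 20.0  # assumed tolerance before subframes are reallocated

def hubs_allocation_step(unicast_queues, broadcast_queue, embms_subframes):
    """Return an updated eMBMS subframe count that reduces the average
    lead/lag between the broadcast stream and its unicast companions."""
    # Lead/lag is estimated from the presentation timestamps at the head of
    # each radio bearer queue (positive offset means broadcast is ahead).
    offsets = [broadcast_queue.head_timestamp_ms() - q.head_timestamp_ms()
               for q in unicast_queues]
    mean_offset = sum(offsets) / len(offsets)

    if mean_offset > OFFSET_DEADBAND_MS:
        # Broadcast leads: hand a subframe back to unicast.
        embms_subframes = max(EMBMS_MIN_SUBFRAMES, embms_subframes - 1)
    elif mean_offset < -OFFSET_DEADBAND_MS:
        # Broadcast lags: allocate an extra subframe to eMBMS.
        embms_subframes = min(EMBMS_MAX_SUBFRAMES, embms_subframes + 1)
    return embms_subframes
```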

Application Partitioning Framework in Mobile Cloud Computing

Application offloading is an emerging research area focused on leveraging the vast computational resources available in the cloud for the benefit of mobile devices. The area is challenging because of the heterogeneity of applications, mobile devices and cloud resources, and offloading becomes even more complex once the unreliable nature of wireless communication is taken into account. In our research, we formulate the offloading problem through contextual modelling of the cloud, the mobile device, the application and the wireless network in terms of their parameters, and we then discuss the feasibility of application partitioning and offloading by representing an application as a graph. Two application scenarios are used: ultra-high-definition video coding and large-scale image retrieval.
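As a toy illustration of the graph formulation, the sketch below uses hypothetical components and made-up costs: each node carries a mobile and a cloud execution cost, each edge a communication cost incurred only when it crosses the partition, and the small placement space is searched exhaustively. The actual work models far richer contextual parameters of the cloud, device, application and network.

```python
from itertools import product

# Illustrative only: component names and cost figures are hypothetical, and the
# exhaustive search stands in for the partitioning heuristics studied in the work.

# Each node: (cost on mobile, cost in cloud), e.g. execution time in ms.
nodes = {
    "decode":  (40.0, 10.0),
    "feature": (120.0, 15.0),
    "match":   (200.0, 20.0),
    "render":  (30.0, 60.0),   # display-bound, cheaper to keep on the mobile
}
# Each edge: (u, v, transfer cost in ms if u and v end up on different sides).
edges = [("decode", "feature", 25.0),
         ("feature", "match", 10.0),
         ("match", "render", 15.0)]

def partition_cost(placement):
    """Total cost of a placement: execution plus cross-partition communication."""
    exec_cost = sum(nodes[n][1] if placement[n] == "cloud" else nodes[n][0]
                    for n in nodes)
    comm_cost = sum(w for u, v, w in edges if placement[u] != placement[v])
    return exec_cost + comm_cost

def best_partition():
    """Exhaustively try every mobile/cloud assignment (fine for small graphs)."""
    names = list(nodes)
    best = None
    for sides in product(("mobile", "cloud"), repeat=len(names)):
        placement = dict(zip(names, sides))
        cost = partition_cost(placement)
        if best is None or cost < best[0]:
            best = (cost, placement)
    return best

if __name__ == "__main__":
    cost, placement = best_partition()
    print(f"estimated cost {cost:.0f} ms with placement {placement}")
```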

ACTION-TV: User interaction aware content generation and distribution for next generation social television

ACTION-TV proposes an innovative mode of user interaction for broadcasting to relax the rigid and passive nature of present broadcasting ecosystems. It has two key aims:

– A group of users can take part in TV shows, providing a sense of immersion in the show and seamless engagement with the content;
– Users are encouraged to use TV shows as a means of social engagement, as well as a way of keeping themselves and their talents more visible across their social circles.

These aims will be achieved by developing an advanced digital media access and delivery platform that augments traditional audio-visual broadcasts with novel interactivity elements to encourage natural engagement with the content. Mixed-reality technologies will be developed to insert users into pre-recorded content, which will be made ‘responsive’ to users’ actions through a set of auxiliary streams. The potential of media cloud technologies will be harnessed to personalise ACTION-TV-enabled broadcast content for a group of collaborating users based on their actions. As a result, content producers will, for the first time, be able to create media applications with richer content-level user interactivity. Cloud-service providers will be able to monetise their infrastructure by leveraging the increased demand for strategically located in-network media processing. Participating users will be able to share personalised content with their social peers. In this way, end users will have access to more engaging personalised content and can socialise with community members who share common interests. ACTION-TV supports a range of applications, from an individual trying out a garment in a TV advert to a group of users interactively attending a TV talent show with the convenience of staying at home. However, the ways of using the proposed interactivity concept are endless, limited only by the imagination of content producers.

3D video analysis

Interaction with 3D video content

One of the main emerging challenges for future multimedia platforms is the development of three-dimensional (3D) display technology, which has prompted a plethora of research activity in the video research community. This emerging technology can bring a whole new experience to the end user by offering a truly immersive 3D viewing experience. However, research into meaningful user interaction with 3D content is still at an early stage.
With this in mind, the main aim of this research activity is to provide a comprehensive understanding of how to develop an interactive 3D video platform that delivers intuitive interaction with 3D video content. The key elements of the proposed platform are effective interaction with the content and the design of appropriate UI modalities. Moreover, in order to specify the design requirements, a number of studies into the implications of the 3D content delivery mechanism, as well as best user practices, are being conducted.

Application-Aware Video Coding

Conventionally, the video encoder is optimised for efficient bandwidth utilisation in video communications, where the distortion due to lossy compression is minimised for a given affordable compressed data rate. However, video utilisation has evolved over the past decade towards video content-based industrial applications in other domains, such as security and control systems. Similarly, in multimedia applications there is an increasing demand for content-based functionalities for video organisation and flexible access.
In real-time scenarios, these applications can exploit information embedded in the compressed video to fulfil the demand for efficient video content analysis. However, compressed-domain video analysis remains a challenge because of sparsity and noise in the compressed features. This is due to the conventional encoder implementation, which is limited to optimising compression and does not necessarily produce content-descriptive compressed features. Compression efficiency is critical for optimum use of bandwidth and storage resources. On the other hand, other aspects of video utilisation, such as video content-based applications, would benefit from enhanced accuracy of content representation in the compressed video stream.
In order to achieve fast and reliable video content analysis, this thesis investigates alternatives to conventional video encoding that would enhance the accuracy of compressed features, while maintaining compliance with the mainstream video coding standards. A generic Application-Aware Video Coding framework is proposed, which incorporates the accuracy of compressed features in parallel with the rate-distortion optimisation criterion.
Considering encoder motion estimation for temporal prediction, the proposed framework was evaluated in three stages. First, a region-based video encoder optimisation criterion was developed to identify and encode foreground regions using accurate motion data; the optimisation is steered by a hierarchical motion estimation based on intensity gradients, as sketched below. This was then extended into a motion-accuracy-constrained rate-distortion optimisation that uses the spatial and temporal correlation of motion activity in the local neighbourhood to accommodate multimodal motion.
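As a simplified illustration of hierarchical motion estimation, the sketch below performs a coarse full-search SAD match on 2x-decimated frames and refines the result at full resolution; the intensity-gradient steering and region classification used in the actual criterion are deliberately omitted.

```python
import numpy as np

# Illustrative only: a plain two-level full-search; block size, search range
# and the 2x decimation are arbitrary choices for the example.

def block_search(ref, cur, bx, by, bs, search, init=(0, 0)):
    """Full search around an initial vector; returns the SAD-minimising (dy, dx)."""
    h, w = ref.shape
    block = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    best_sad, best_mv = None, init
    for dy in range(init[0] - search, init[0] + search + 1):
        for dx in range(init[1] - search, init[1] + search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and y + bs <= h and 0 <= x and x + bs <= w:
                sad = int(np.abs(block - ref[y:y + bs, x:x + bs].astype(np.int32)).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv

def hierarchical_me(ref, cur, bs=16, search=4):
    """Two-level hierarchical block motion estimation: a coarse search on
    2x-decimated frames seeds a refinement search at full resolution."""
    ref_c, cur_c = ref[::2, ::2], cur[::2, ::2]
    h, w = cur.shape
    field = {}
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            coarse = block_search(ref_c, cur_c, bx // 2, by // 2, bs // 2, search)
            seed = (coarse[0] * 2, coarse[1] * 2)   # scale the coarse vector up
            field[(by, bx)] = block_search(ref, cur, bx, by, bs, search, seed)
    return field
```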
Finally, an unconstrained optimisation model combining Rate-Distortion and Motion-Description-Error was developed, leading to a fully scalable implementation of the framework. A motion-calibrated synthetic data set covering different scene complexities was designed to analyse the framework under known motion content, and a mathematical model for the Motion-Description-Error was derived as a function of the optimisation parameters, scene complexity and encoder configuration. It is demonstrated that the proposed optimisation framework can reduce the extent of noise in the estimated motion by 50-60%, without compromising rate-distortion performance or encoder complexity.
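As a simplified numerical illustration of the combined criterion, the sketch below picks the motion vector minimising a joint cost of the form J = D + lambda*R + mu*MDE; the weights and candidate figures are invented for the example, whereas the thesis derives the Motion-Description-Error term analytically.

```python
# Illustrative only: lambda_rd, mu_mde and the candidate figures are invented;
# the real framework derives and calibrates the MDE term analytically.

def joint_cost(distortion, rate_bits, mde, lambda_rd, mu_mde):
    """J = D + lambda*R + mu*MDE: the conventional rate-distortion cost
    extended with a motion-description-error term."""
    return distortion + lambda_rd * rate_bits + mu_mde * mde

def pick_motion_vector(candidates, lambda_rd=0.85, mu_mde=2.0):
    """Choose the candidate motion vector with the smallest joint cost.

    Each candidate is (mv, SAD distortion, bits to code the mv,
    deviation of the mv from the estimated true motion)."""
    return min(candidates,
               key=lambda c: joint_cost(c[1], c[2], c[3], lambda_rd, mu_mde))[0]

# Toy example: a marginally better-SAD but noisy vector competes with a
# slightly costlier vector that tracks the true motion closely.
candidates = [((8, -3), 950.0, 6, 7.2),   # best SAD, poor motion description
              ((2, -1), 955.0, 4, 0.5)]   # slightly worse SAD, accurate motion
print(pick_motion_vector(candidates))     # -> (2, -1) once MDE is weighted in
```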