Mozilla is drafting a proposal for a new Web standard called MediaStream Processing that introduces JavaScript APIs for manipulating audio and video streams in real time. The specification is still at an early stage of development, but Mozilla has already started working on an implementation for testing purposes.

Mozilla's Robert O'Callahan, the author of the MediaStream Processing API proposal draft, has released experimental Firefox builds that include MediaStream Processing support. He has also published a set of demos (note: you need to run the experimental build to see the demos) that illustrate some of the functionality defined by the specification.

The demos show how the APIs can be used to perform tasks like rendering a visualization of a video's audio track in a Canvas element while the video is playing. They also show how the APIs can be used for mixing tasks, like implementing a cross-fade between two audio streams, dynamically adjusting the volume of a video, and programmatically generating audio streams.

One of the characteristics that distinguishes the MediaStream Processing API from previous Web audio API proposals is that it aims to interoperate better with existing Web standards. For example, it relies on the MediaStream interface in the WebRTC specification. It also allows developers to take advantage of Web Workers for threading, and will work with getUserMedia to eventually support real-time manipulation of streams from microphones and webcams.

The current implementation of the specification focuses on audio capabilities.
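One of the mixing tasks mentioned above, cross-fading two audio streams, ultimately comes down to simple per-stream gain math. The sketch below shows an equal-power cross-fade curve, a common audio-mixing technique; the function name and shape are illustrative, not part of the MediaStream Processing draft itself:

```javascript
// Equal-power cross-fade: as stream A fades out and stream B fades in,
// the cosine/sine gain pair keeps the summed signal power roughly
// constant, avoiding the mid-fade volume dip a linear fade produces.
function equalPowerCrossfade(t) {
  // t runs from 0 (only stream A audible) to 1 (only stream B audible).
  const clamped = Math.min(1, Math.max(0, t));
  return {
    gainA: Math.cos(clamped * Math.PI / 2), // fades 1 -> 0
    gainB: Math.sin(clamped * Math.PI / 2), // fades 0 -> 1
  };
}

// At the midpoint both gains are ~0.707, so gainA^2 + gainB^2 = 1
// and the combined loudness stays level through the fade.
const mid = equalPowerCrossfade(0.5);
```

In an API like the one the draft describes, values such as these would be recomputed each frame and applied as the volume of each input stream feeding a mixed output.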