Streaming with .mp4
By Brent Harshbarger
Getting high-speed Internet to your back door, commonly referred to as the last mile, is still the biggest obstacle to the widespread adoption of broadband. Unless you are one of the lucky few located in a metropolitan area, you are still waiting and waiting to view, use, or offer your church all of the most dynamic Internet content being created today.
Several technical organizations are spending significant money and time researching better ways around this lack of speed at the last mile. One of the most noted is the Moving Picture Experts Group, or MPEG. The newest technology developed to resolve many of these issues and take streaming to the next level is MPEG-4. Development of MPEG-4 began in July 1993, and it became an official standard at the beginning of 1999. It was at this time that many software and video game manufacturers started to incorporate the technology into their products. Only in the last year have the fruits of their labor become commercially available.
So what is MPEG-4 anyway? Depending on whom you ask, you could get different answers. That is because MPEG-4 has a wide range of features, and the ones that matter may differ from one person to the next. If you are an audio person, MPEG-4 supports higher sampling rates, which translates into higher-fidelity sound while keeping the file size small. If you are a video person, it supports better-quality video at smaller file sizes. If you want to stream video, it provides the means to deliver it over a network. If you are a web or DVD producer, it allows interactivity using multiple media types.
Before we can fully appreciate what this new standard has to offer, we need to understand the predecessors of MPEG-4. Many are familiar with the MPEG family from using MP3 for music and MPEG-2 for making the DVD (Digital Versatile Disc) a reality.
The first development by the MPEG group was MPEG-1, which became a standard at the end of 1992. The focus of this standard was a means to play video on the computer. Later, a derivative arrived that is best known as MP3 (technically MPEG-1 Layer 3). The next standard was MPEG-2, which focused on DVD and digital broadcasting applications and was approved in 1994. It was about this time that an unforeseen technology was coming on strong: the Internet. Internet developers, and users, were demanding video. This required better video compression techniques and the means to support them. In addition, web applications were starting to become interactive. So to be successful, MPEG-4 had to consider all of these requirements while building in future compatibility to support HDTV. Each of these technologies requires a different delivery method, and, ideally, the content must be scalable so that it can be consumed without concern for whether it is being viewed on an HDTV home theater system or the tiny video screen of a wireless phone.
Below is a general breakdown of the MPEG standards, with the approximate bandwidth after compression, the resolution supported, and the intended application:

MPEG-1 (1992): about 1.5 Mbit/s; roughly 352 x 240 resolution; video playback on computers and Video CD.
MPEG-2 (1994): roughly 4 to 15 Mbit/s, higher for HDTV; 720 x 480 and up; DVD and digital broadcasting.
MPEG-4 (1999): from a few kbit/s up to several Mbit/s; scalable from phone screens to full resolution; Internet streaming, wireless devices, and interactive media.
These standards came about in the early 1990s because of the slow speed and high cost of transmitting and storing digital audio and video. The MPEG group therefore created common methods of reducing the size of the ones and zeros that make up digital information through mathematical compression. This process is known as encoding. Encoding allows a file to be stored or transmitted using less hard drive space or bandwidth. Once the file or transmission is ready for consumption, it is decoded to look and feel like the original signal before it was compressed.
Almost everyone has experience with this technology from zipping computer files. If you use a zip file, you are using the same basic principle as MPEG, though zipping is applied to a non-real-time file. So once you get Sunday's PowerPoint presentation ready for your pastor and you are ready to email it for his review, you compress the file by zipping it down to a smaller size to reduce the upload time. After your pastor has saved the file to his hard drive, he unzips it, or decodes it, so that he can see your work as you created it.
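For the technically curious, the same round trip can be sketched in a few lines of Python using the standard zlib library, which uses the same family of compression found in zip files. (This is a lossless illustration; MPEG's video and audio codecs are lossy, trading a little fidelity for much smaller sizes.)

```python
import zlib

# A "file" with lots of repetition, like the text of a presentation.
original = b"Welcome to Sunday service! " * 100

# Encode: compress the bytes before storing or emailing them.
compressed = zlib.compress(original)
print(len(original), "bytes before,", len(compressed), "bytes after")

# Decode: the receiver decompresses and gets back an exact copy.
restored = zlib.decompress(compressed)
assert restored == original  # lossless: nothing was changed
```

The compressed copy is a small fraction of the original because repeated patterns are stored only once, which is the same basic idea MPEG applies, with far more sophistication, to sound and pictures.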
To illustrate how MPEG technology works, let's compare MPEG-2 to MPEG-4 in a non-linear editing application. Say we have a video of our pastor talking, and we would like to add a graphic showing his name and title as he starts speaking. On the computer, we drag a video clip (a video object) onto our timeline along with the title graphic (a graphic object). We want the graphic object to fade in a few frames after the video starts and fade out a few seconds later. If you are not using a real-time system, the computer will need to render the two objects together frame by frame. Once it has finished rendering, it plays the result out as a single composited video. Since we want to put the video on a DVD, we encode it using an MPEG-2 encoder. Once the data is on the DVD and we are ready to watch, the DVD player decodes the MPEG-2 stream back into video.
MPEG-4 is different because the object rendering and compositing are done at the decoder, not before the encoding process as in the technologies that came before it.
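The difference between the two pipelines can be sketched in Python. This is pure illustration, not real codec code: "frames" are just strings, zlib stands in for the encoder, and the pastor's name is an invented placeholder.

```python
import zlib

video = ["frame1", "frame2", "frame3"]
title = "Pastor John Smith"  # hypothetical name, for illustration only

# MPEG-1/MPEG-2 style: composite first, then encode one flat stream.
flat = [(frame + "|" + title).encode() for frame in video]
mpeg2_style_stream = [zlib.compress(f) for f in flat]

# MPEG-4 style: encode the objects separately...
video_stream = [zlib.compress(frame.encode()) for frame in video]
title_stream = zlib.compress(title.encode())

# ...and composite at the decoder. The title could be changed, moved,
# or turned off here without ever re-encoding the video object.
decoded_title = zlib.decompress(title_stream).decode()
composited = [
    zlib.decompress(f).decode() + "|" + decoded_title for f in video_stream
]
assert composited == [f.decode() for f in flat]  # same picture on screen
```

The viewer sees the same result either way; the difference is that in the MPEG-4 style the pieces stay separate all the way to the decoder.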
MPEG-4 can use several technologies independently or together. This is possible because MPEG-4 provides a new approach in which different types of media are treated independently as objects. An object can be a still image, text, video, or audio, and it can be 2D or 3D. Objects can be real, such as something recorded with a video camera, or synthetic, like a 3D animation created on a computer, or both. Objects are independent of each other and are referenced by a scene and a time frame.
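As a rough sketch of that idea (the class and field names here are made up for illustration and are not part of the MPEG-4 specification), a scene can be modeled as a list of independent objects, each with its own type and its own place on the timeline:

```python
from dataclasses import dataclass

@dataclass
class MediaObject:
    name: str     # e.g. "pastor_video", "title_graphic"
    kind: str     # "video", "audio", "text", "image", "3d"
    start: float  # seconds into the scene when the object appears
    end: float    # seconds into the scene when it disappears

# The scene references objects; it never flattens them together.
scene = [
    MediaObject("pastor_video", "video", start=0.0, end=60.0),
    MediaObject("title_graphic", "image", start=2.0, end=7.0),
]

def visible_at(scene, t):
    """Which objects the decoder composites at time t."""
    return [obj.name for obj in scene if obj.start <= t < obj.end]

print(visible_at(scene, 5.0))   # both objects on screen
print(visible_at(scene, 30.0))  # the title has already faded out
```

Because each object keeps its own timeline entry, the decoder can answer "what is on screen right now?" at playback time instead of having that answer baked into a flat stream.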
The technologies in MPEG-4 can be broken down into three categories: the Compression Layer, the Sync Layer, and the Transport Layer. Together, these three layers make MPEG-4 extremely flexible.
The Compression Layer provides several services, including scene description, object management, and, finally, compression of the objects. The key here is that each object is compressed independently.
The Sync Layer puts a timestamp on the objects to keep them synchronized with one another. This keeps all of the audio, video, and other objects synced so they arrive together at the right time and place on every playback, just as the producer designed it, or as the consumer requested it (in an interactive application).
The Transport Layer provides several methods for transmission and storage. Supported transports include UDP/IP, a protocol used on the Internet; ATM (Asynchronous Transfer Mode), which is used in private and public WANs (Wide Area Networks); and wired and wireless telephone networks using the H.223 standard. The data can also be stored on a hard drive as a .mp4 file. In addition, MPEG-4 data can be carried over MPEG-2 for use in satellite applications.
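A toy sketch can show how the three layers cooperate. Again, the structure and names here are invented for illustration, with zlib standing in for the real codecs: each object is compressed independently (Compression Layer), stamped with a presentation time (Sync Layer), and the packets may then travel by any route and arrive in any order (Transport Layer), because the decoder reorders them by timestamp before compositing.

```python
import zlib

# Compression Layer: each object is compressed independently.
objects = {"audio": b"audio samples " * 50, "video": b"video frames " * 50}
packets = [
    {"name": name, "timestamp": ts, "payload": zlib.compress(data)}
    for ts, (name, data) in enumerate(objects.items())
]

# Transport Layer: the network may deliver packets out of order.
arrived = list(reversed(packets))

# Sync Layer: the decoder reorders by timestamp, then decodes each
# object and composites them for playback.
for pkt in sorted(arrived, key=lambda p: p["timestamp"]):
    data = zlib.decompress(pkt["payload"])
    print(pkt["timestamp"], pkt["name"], len(data), "bytes decoded")
```

The point of the sketch is that no single layer needs to know about the others: compression does not care how the packets travel, and transport does not care what is inside them, which is exactly what makes the design so flexible.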
The benefit of this three-layer approach is that it allows the media to be scalable and provides a way to support interactive applications such as distance learning, online video games, and video-on-demand. This is a very different approach from MPEG-1 and MPEG-2, because those technologies create a composite, or flat, file, encode it, and then make it available for distribution. Once the encoded data reaches its destination, it can only be decoded and played.
MPEG-4 works a bit differently: it keeps all of the objects separate and distributes each one by the best available method. Once the data arrives at its destination, the decoder uses the scene description and its instructions to create a composite for viewing or consumption.
As you can see, MPEG-4 is quite a departure from earlier MPEG work, and it expands how we are able to use media. A video can be created by a missionary, compressed, and sent over a phone network to an FTP site at a sponsoring church. The video can then be decoded, played out, and projected at full video resolution on Sunday morning.
Churches that create their own curriculum can produce educational material that can be stored on DVDs for playback at home, streamed on the Internet, or distributed over the church's LAN. This material can include interactive teaching and testing, with audio, video, and 3D graphics of the geographic areas being discussed that you can "walk" through via interactive maps or animations as you progress through a lesson.
The possibilities are limited only by the imagination, and by the technical skills available. Several new tools will be required: some hardware, but it is the amount of software required that has grown significantly. Making the most of everything available will require several dedicated people with different talents working together to create compelling media to communicate the message of Christ and to better educate His disciples. Let's not forget that these talents require tools. The churches that support these ministries properly, providing education, the right tools, and a spiritual support system, will reap a great harvest. Read Part 2.
Brent Harshbarger is the founder of m3tools, a company that specializes in multimedia technologies for ministry. Brent can be reached via email at firstname.lastname@example.org.