Friday Session #4: Introduction to AVFoundation

Please note, this blog entry is from a previous course. You might want to check out the current one.

This week's Friday session addresses the AV Foundation framework. It can be found on iTunes under the title “Introduction to AVFoundation (October 21, 2011)”.

In this lecture, Salik Syed gives an overview of how the Objective-C interface can be used to access audio-visual media on iOS. After a high-level discussion of the API, he shows in a demo how to manipulate video from an iPhone's camera in real time.

The AV Foundation framework can be used to examine, create, and edit media files; to take input streams from a device and manipulate video during real-time capture or playback; or to combine various assets, re-encode them, and perform playback efficiently.
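
Examining a media file, for instance, starts from an AVAsset. The following is a minimal sketch, with a placeholder file path and an assumed surrounding class, of loading an asset's properties asynchronously and reading them once they are available:

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal sketch: inspect a media file through AVAsset.
// The file path is a placeholder.
- (void)inspectAsset {
    NSURL *url = [NSURL fileURLWithPath:@"movie.mov"];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];

    // Asset properties are loaded lazily; request them before reading.
    NSArray *keys = [NSArray arrayWithObjects:@"duration", @"tracks", nil];
    [asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
        NSError *error = nil;
        if ([asset statusOfValueForKey:@"duration" error:&error] ==
                AVKeyValueStatusLoaded) {
            NSLog(@"Duration: %.2f seconds",
                  CMTimeGetSeconds(asset.duration));
            NSLog(@"Tracks: %u", (unsigned)[asset.tracks count]);
        }
    }];
}
```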

Often several APIs exist for the same task. In that case you should always use the highest level of abstraction available, as higher-level APIs are simpler and less error-prone. AV Foundation is therefore not the best choice if you simply need to capture a photo or video, or to display one.
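
For plain capture, the higher-level UIKit classes are usually enough. As a rough sketch, UIImagePickerController can present the system camera UI in a few lines; this assumes a UIViewController subclass that adopts UIImagePickerControllerDelegate and UINavigationControllerDelegate, and the method name is made up for illustration:

```objc
#import <UIKit/UIKit.h>

// Sketch: simple capture without AV Foundation. Assumes a
// UIViewController subclass adopting the picker delegate protocols.
- (void)captureWithPicker {
    if (![UIImagePickerController
            isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return; // no camera available, e.g. in the simulator
    }
    UIImagePickerController *picker =
        [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}
```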

AV Foundation, on the other hand, is ideal for capturing and processing camera frames in real time, for fine-grained control over input devices (such as focus detection or exposure adjustment), and for combining multiple sources or types of media.
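
A typical real-time pipeline wires a camera input and a video data output into an AVCaptureSession. The sketch below makes the usual assumptions: it lives in a class adopting AVCaptureVideoDataOutputSampleBufferDelegate, and a real app would keep a strong reference to the session.

```objc
#import <AVFoundation/AVFoundation.h>

// Minimal sketch of a real-time capture pipeline.
- (void)startCapture {
    // In a real app, store the session in an instance variable so it
    // outlives this method.
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Wrap the default camera in a capture input.
    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) [session addInput:input];

    // Deliver uncompressed frames to the delegate on a serial queue.
    AVCaptureVideoDataOutput *output =
        [[AVCaptureVideoDataOutput alloc] init];
    dispatch_queue_t queue = dispatch_queue_create("frame.queue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    [session addOutput:output];

    [session startRunning];
}

// Called once per captured frame; the sample buffer wraps the pixel
// data, which can be inspected or modified before display.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Process the frame here, e.g. via CMSampleBufferGetImageBuffer().
}
```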

Detailed documentation is available in the AV Foundation Programming Guide, the AV Foundation Framework Reference, and the Member Center.

The demo itself shows how to capture video from the iPhone's camera, detect a face on screen with face recognition, and manipulate the video by overlaying a pair of glasses on the detected face.
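
The demo's own code is not reproduced here, but Core Image's CIDetector, introduced in iOS 5 around the time of this lecture, is one way to implement the face-detection step. The sketch below is a hypothetical illustration: it takes a CIImage (which could be created from a captured frame with [CIImage imageWithCVPixelBuffer:]) and reports where a glasses overlay could be anchored.

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Hypothetical sketch of the face-detection step using CIDetector.
// Low accuracy is chosen to keep detection fast enough for real time.
- (void)findFacesInFrame:(CIImage *)frame {
    CIDetector *detector =
        [CIDetector detectorOfType:CIDetectorTypeFace
                           context:nil
                           options:[NSDictionary
                               dictionaryWithObject:CIDetectorAccuracyLow
                                             forKey:CIDetectorAccuracy]];
    for (CIFaceFeature *face in [detector featuresInImage:frame]) {
        // face.bounds is the face rectangle in image coordinates; a
        // glasses overlay would be positioned over the detected eyes.
        if (face.hasLeftEyePosition && face.hasRightEyePosition) {
            NSLog(@"Eyes at %@ and %@",
                  NSStringFromCGPoint(face.leftEyePosition),
                  NSStringFromCGPoint(face.rightEyePosition));
        }
    }
}
```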

The code shown in the demo is available for download at Stanford.
