Last updated for Ambulant version 1.8.
This section explains the basic structure of the Ambulant Player by loosely following what happens when you run it and play a document.
There is an informal overall structure diagram that tries to put the whole design in one pretty picture and may be worthwhile to keep handy while reading this document.
The main program is platform dependent and GUI-toolkit dependent. The details of this main program are skipped here (and they can actually vary quite a bit for the platforms we support), but at some point after the program has started the GUI is put on the screen, with the usual set of menus for Open, Play, etc.
When the user selects Open (or Open URL, or double-clicks or drags a document) we need to get the data, parse the document into a DOM tree and create a player to play that DOM tree. In addition, we need to tell the player how it can obtain media data, create windows, and more.
Most player implementations (the Windows player is an exception) have a class with a name like mainloop to handle this. Such a mainloop is created per SMIL document. Actually, Ambulant provides a class gui_player which can be used as a skeleton for such a mainloop class, handling most of the bookkeeping sketched below.
The mainloop object should first create the various factories and populate them with the appropriate platform-specific implementations.
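The exact factory set and its interfaces vary per platform and are not spelled out here; as a rough illustration of the pattern only (all names below are hypothetical, not the real Ambulant API), the mainloop holds one slot per subsystem and fills each with a platform-specific implementation at startup:

```cpp
#include <cassert>
#include <memory>

// Illustrative sketch only: these names model the pattern, not the real API.
struct playable_factory   { virtual ~playable_factory() = default; };
struct window_factory     { virtual ~window_factory() = default; };
struct datasource_factory { virtual ~datasource_factory() = default; };

// The mainloop/gui_player holds one slot per subsystem.
struct factories {
    std::unique_ptr<playable_factory>   playable;
    std::unique_ptr<window_factory>     window;
    std::unique_ptr<datasource_factory> datasource;
};

// Platform-specific implementations (here just stubs).
struct gui_window_factory      : window_factory {};
struct smil_playable_factory   : playable_factory {};
struct posix_datasource_factory : datasource_factory {};

// Populate every slot before playback starts.
factories make_factories() {
    factories f;
    f.window     = std::make_unique<gui_window_factory>();
    f.playable   = std::make_unique<smil_playable_factory>();
    f.datasource = std::make_unique<posix_datasource_factory>();
    return f;
}
```

Plugins (see below) later extend this same object, which is why it is passed around rather than kept private to the mainloop.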
Next init_plugins is called, and if the architecture supports dynamically loadable plugins we get the plugin_engine singleton object and ask it to load the plugins. This will search the plugin directories for dynamic objects that follow the correct naming convention, load them, and call their initialize routine. The factories object (another interface usually implemented by gui_player) and the gui_player object are passed to the initialize routine, so the plugin can register any factories it wants. Additionally, a plugin could modify the gui_player so that it gets control later, during playback of the document.
The next step is to create the DOM tree. One way to do this is to use read_data_from_url to read the data from the document, and then pass this data to document::create_from_string. This will return a document object. This object contains the DOM tree itself (implemented by the node object) and some context information (XML namespace information, original URL for resolving relative URLs used in the document, a mapping from XML IDs to node objects). There is a convenience function create_document that does all this for you.
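A minimal model of the document object described above might look as follows. This is a sketch under assumptions, not the real Ambulant classes: the actual node and document interfaces are much richer, and parsing is done by a real XML parser. It only shows the key idea that the document couples the DOM tree with context information such as the ID-to-node mapping:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Illustrative model of the node/document split described in the text.
struct node {
    std::string tag;
    std::map<std::string, std::string> attrs;
    std::vector<std::unique_ptr<node>> children;
};

struct document {
    std::unique_ptr<node> root;           // the DOM tree itself
    std::string base_url;                 // for resolving relative URLs
    std::map<std::string, node*> id_map;  // XML ID -> node lookup
};

// After parsing, walk the tree once to build the ID mapping.
void register_ids(node* n, document& doc) {
    auto it = n->attrs.find("id");
    if (it != n->attrs.end()) doc.id_map[it->second] = n;
    for (auto& c : n->children) register_ids(c.get(), doc);
}
```

In the real player, create_document wraps fetching the data, parsing it, and building this context in one call.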
The final step is to create a player object. This is done through create_smil2_player, passing the document, the factories and one final object, embedder. This object is again implemented by the main program, and implements a small number of auxiliary functions, such as opening an external web browser or opening a new SMIL document.
When the smil_player object is created it gets the document, factories and embedder arguments. It then creates the internal data structures it needs to facilitate playback later on.
When the user selects Play we call the start method of the player object, which invokes start on the scheduler, which in turn starts playing the root node of the tree. The scheduler now does all the SMIL 2 magic, whereby events such as the root node starting to play cause other nodes to become playable, and so on.
At some point a media item needs to be rendered. The scheduler calls the new_playable method of the global_playable_factory. This will pass the DOM node to the various factories until one signals it can create a playable for the object. In addition, if the playable has a renderer (which is true for most media objects, but not for things like SMIL animations) we also obtain the surface on which the media item should be rendered, through the layout_manager. We then tell the renderer which surface to use.
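The "ask each factory in turn" step is a classic chain-of-responsibility. The sketch below models it with hypothetical names (the real new_playable signature and factory interfaces differ): the global factory walks its registered factories and the first one that supports the node creates the playable:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Illustrative names only; not the real Ambulant interfaces.
struct node { std::string tag; };
struct playable { virtual ~playable() = default; };

struct playable_factory {
    virtual ~playable_factory() = default;
    virtual bool supports(const node& n) const = 0;
    virtual std::unique_ptr<playable> create(const node& n) = 0;
};

// One example factory: handles image nodes only.
struct image_factory : playable_factory {
    bool supports(const node& n) const override { return n.tag == "img"; }
    std::unique_ptr<playable> create(const node&) override {
        return std::make_unique<playable>();
    }
};

struct global_playable_factory {
    std::vector<std::unique_ptr<playable_factory>> factories;
    // Try each registered factory in order; the first that supports
    // the DOM node wins.
    std::unique_ptr<playable> new_playable(const node& n) {
        for (auto& f : factories)
            if (f->supports(n)) return f->create(n);
        return nullptr;  // no factory can handle this node
    }
};
```

This is also what makes the plugin mechanism work: a plugin simply registers one more factory in the chain.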
Soon afterwards the start method of the playable is called to start playback. A typical renderer will need to obtain data from some URL. It will do this by creating a datasource for the document through the datasource_factory object. Every time the renderer wants more data it calls the start method of the datasource, passing a callback routine. Whenever data is available the datasource will schedule a call to the callback routine, through the event processor. When the renderer has enough information to start drawing it will not actually draw immediately, but will send a need_redraw call to its surface. This will percolate up the surface hierarchy, to the GUI code, and eventually come back down as a redraw call all the way to the renderer (assuming it is not obscured by other media items, etc). At this point the bits finally get drawn on the screen.
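The datasource protocol above can be sketched as follows. This is a simplified model under assumptions (the event processor is reduced to a plain queue, and all names are illustrative rather than the real Ambulant API); it shows only the pull-style handshake: the renderer calls start with a callback, and the datasource delivers each chunk through the event processor rather than calling the renderer directly:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <string>

// Stand-in for the event processor: queued callbacks run later,
// on the scheduler's thread, not inside the datasource.
struct event_processor {
    std::deque<std::function<void()>> queue;
    void add_event(std::function<void()> ev) { queue.push_back(std::move(ev)); }
    void run_pending() {
        while (!queue.empty()) {
            auto ev = queue.front();
            queue.pop_front();
            ev();
        }
    }
};

// Toy datasource: pretends the data has already arrived and hands it
// out in fixed-size chunks, one chunk per start() call.
struct datasource {
    event_processor* evp;
    std::string buffered;
    std::size_t pos = 0;
    void start(std::function<void(std::string)> callback) {
        std::string chunk = buffered.substr(pos, 4);
        pos += chunk.size();
        evp->add_event([callback, chunk] { callback(chunk); });
    }
};
```

The key property is that the renderer never blocks waiting for data: it asks once, returns, and is called back when something is available.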
Whenever anything "interesting" happens in the renderer (the media item stopped playing, the user clicked the mouse, etc) it invokes a corresponding method on its playable_notification. This interface is implemented by the scheduler, and these notifications are how the scheduler gets informed that it can start scheduling new things, etc.
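The notification flow can be sketched as a plain observer interface. Method and member names here are illustrative assumptions, not the real playable_notification API; the point is only the direction of the calls: the scheduler implements the interface, and the renderer reports events through it:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative notification interface (real method names differ).
struct playable_notification {
    virtual ~playable_notification() = default;
    virtual void stopped(int cookie) = 0;
    virtual void clicked(int cookie) = 0;
};

// The scheduler implements the interface; the cookie identifies
// which playable the event came from.
struct scheduler : playable_notification {
    std::vector<std::string> log;
    void stopped(int cookie) override {
        log.push_back("stopped " + std::to_string(cookie));
        // here the real scheduler would start successor nodes
    }
    void clicked(int cookie) override {
        log.push_back("clicked " + std::to_string(cookie));
        // here the real scheduler would fire activateEvent timing
    }
};

// The renderer only knows the abstract interface, never the scheduler.
struct renderer {
    playable_notification* notify;
    int cookie;
    void media_done() { notify->stopped(cookie); }
};
```

Because the renderer sees only the abstract interface, renderers and the scheduler can evolve independently.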
If you haven't already done so, a good place to continue reading is the Overall design document, which gives an overview of the design principles and explains some of the choices made. Then continue with the objects document which describes the main objects in more detail. Or go back to the main documentation index.