Sample programs

The directory src/main/simple_examples in the tarball contains some example programs to illustrate how to call the library.

See the README file in that directory for details on what each of the programs does.

The most complete program in that directory is sample.cc, which closely resembles the program shown below. It reads text from stdin, morphologically analyzes it, and processes the obtained results.

Note that, depending on the application, the input text could be obtained from a speech recognition system, from an XML parser, or from any other source suiting the application goals. Similarly, instead of being printed, the obtained analysis could be fed to a translation system, sent to a dialogue control module, etc.

#include <iostream>
#include <string>
#include <list>
#include "freeling.h"
using namespace std;

// application-specific processing of analyzed sentences (defined below)
void ProcessResults(const list<sentence> &ls);

int main() {
  string text;
  list<word> lw;
  list<sentence> ls;

  string path="/usr/local/share/FreeLing/es/";

  // create analyzers
  tokenizer tk(path+"tokenizer.dat"); 
  splitter sp(path+"splitter.dat");
  
  // morphological analysis has a lot of options, and for simplicity they are packed up
  // in a maco_options object. First, create the maco_options object with default values.
  maco_options opt("es");  
  // then, set required options on/off  
  opt.QuantitiesDetection = false;  // deactivate ratio/currency/magnitudes detection
  opt.AffixAnalysis = true; opt.MultiwordsDetection = true; opt.NumbersDetection = true; 
  opt.PunctuationDetection = true; opt.DatesDetection = true; 
  opt.DictionarySearch = true; opt.ProbabilityAssignment = true; opt.NERecognition = NER_BASIC;   
  // alternatively, you can set active modules in a single call:
  //     opt.set_active_modules(true, true, true, true, true, false, true, true, NER_BASIC, false);

  // and provide files for morphological submodules. Note that it is not necessary
  // to set opt.QuantitiesFile, since Quantities module was deactivated.
  opt.LocutionsFile=path+"locucions.dat"; opt.AffixFile=path+"afixos.dat";
  opt.ProbabilityFile=path+"probabilitats.dat"; opt.DictionaryFile=path+"maco.db";
  opt.NPdataFile=path+"np.dat"; opt.PunctuationFile=path+"../common/punct.dat"; 
  // alternatively, you can set the files in a single call:
  //  opt.set_data_files(path+"locucions.dat", "", path+"afixos.dat", 
  //                     path+"probabilitats.dat", path+"maco.db", 
  //                     path+"np.dat", path+"../common/punct.dat", "");
  
  // create the analyzer with the just-built set of maco_options
  maco morfo(opt); 
  // create an HMM tagger for Spanish (with retokenization ability, and 
  // forced to choose only one tag per word)
  hmm_tagger tagger("es", path+"tagger.dat", true, true); 
  // create chunker
  chart_parser parser(path+"grammar-dep.dat");
  // create dependency parser 
  dep_txala dep(path+"dep/dependences.dat", parser.get_start_symbol());
  
  // get plain text input lines while not EOF.
  while (getline(cin,text)) {
    
    // tokenize input line into a list of words
    lw=tk.tokenize(text);
    
    // accumulate list of words in splitter buffer, returning a list of sentences.
    // The resulting list of sentences may be empty if the splitter has still not 
    // enough evidence to decide that a complete sentence has been found. The list
    // may contain more than one sentence (since a single input line may consist 
    // of several complete sentences).
    ls=sp.split(lw, false);
    
    // perform and output morphosyntactic analysis and disambiguation
    morfo.analyze(ls);
    tagger.analyze(ls);

    // Do whatever our application does with the analyzed sentences
    ProcessResults(ls);
    
    // clear temporary lists
    lw.clear(); ls.clear();    
  }
  
  // No more lines to read. Make sure the splitter doesn't retain anything  
  ls=sp.split(lw, true);   
 
  // analyze sentence(s) which might be lingering in the buffer, if any.
  morfo.analyze(ls);
  tagger.analyze(ls);

  // Process last sentence(s)
  ProcessResults(ls);
}

The processing performed on the obtained results would obviously depend on the goal of the application (translation, indexation, etc.). In order to illustrate the structure of the linguistic data objects, a simple procedure is presented below, in which the processing consists of merely printing the results to stdout in XML format.

void ProcessResults(const list<sentence> &ls) {
  
  list<sentence>::const_iterator s;
  word::const_iterator a;   // iterator over all analyses of a word
  sentence::const_iterator w;
  
  // for each sentence in list
  for (s=ls.begin(); s!=ls.end(); s++) {
    
    // print sentence XML tag
    cout<<"<SENT>"<<endl;
      
    // for each word in sentence
    for (w=s->begin(); w!=s->end(); w++) {
      
      // print word form, with PoS and lemma chosen by the tagger
      cout<<"  <WORD form=\""<<w->get_form();
      cout<<"\" lemma=\""<<w->get_lemma();
      cout<<"\" pos=\""<<w->get_parole();
      cout<<"\">"<<endl;
      
      // for each possible analysis of the word, output lemma, parole tag, and probability
      for (a=w->analysis_begin(); a!=w->analysis_end(); ++a) {
	
        // print analysis info
        cout<<"    <ANALYSIS lemma=\""<<a->get_lemma();
        cout<<"\" pos=\""<<a->get_parole();
        cout<<"\" prob=\""<<a->get_prob();
        cout<<"\"/>"<<endl;
      }
      
      // close word XML tag after the list of analyses
      cout<<"  </WORD>"<<endl;
    }
    
    // close sentence XML tag
    cout<<"</SENT>"<<endl;
  }
}
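One detail worth noting: the printer above writes word forms and lemmas into XML attributes verbatim. If the input may contain characters such as &, <, or quotes, they should be escaped to keep the output well-formed. A minimal sketch of such a helper is shown below (xml_escape is a hypothetical name, not part of the FreeLing API):

```cpp
#include <string>

// Escape the five XML special characters so a string can be safely
// embedded in attribute values or element content.
std::string xml_escape(const std::string &s) {
  std::string out;
  out.reserve(s.size());
  for (char c : s) {
    switch (c) {
      case '&':  out += "&amp;";  break;
      case '<':  out += "&lt;";   break;
      case '>':  out += "&gt;";   break;
      case '"':  out += "&quot;"; break;
      case '\'': out += "&apos;"; break;
      default:   out += c;
    }
  }
  return out;
}
```

With such a helper, each attribute would be printed as cout<<xml_escape(w->get_form()) instead of the raw string.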

The above sample program may be found in src/main/simple_examples/sample.cc in the FreeLing tarball.

Once you have compiled and installed FreeLing, you can build this sample program (or any other you may want to write) with the command:
g++ -o sample sample.cc -lmorfo -ldb_cxx -lpcre -lomlet -lfries -lboost_filesystem

Check the README file in the directory to learn more about compiling and using the sample programs.

Option -lmorfo links against the libmorfo library, which is the final result of the FreeLing compilation process. The other options refer to additional libraries required by FreeLing.

You may have to add some -I and/or -L options to the compilation command, depending on where the headers and libraries of the required packages are located. For instance, if you installed some of the libraries in /usr/local/mylib instead of the default location /usr/local, you'll have to add the options -I/usr/local/mylib/include -L/usr/local/mylib/lib to the command above.
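For instance, assuming some of the required libraries had been installed under the hypothetical prefix /usr/local/mylib, the full compilation command would become:

```shell
g++ -o sample sample.cc -I/usr/local/mylib/include -L/usr/local/mylib/lib \
    -lmorfo -ldb_cxx -lpcre -lomlet -lfries -lboost_filesystem
```

Adjust the -I and -L paths to wherever the libraries actually reside on your system.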

Lluís Padró 2010-09-02