The Magician (2001) - Radwan El-Kashef ✩✩✩✩

http://www.elcinema.com/work/wk1010968

Movie with Mahmoud Abdel-Aziz, Menna Shalabi, Gamil Ratib, Sari El-Naggar, Mika. From my karass :-)

Information overload

Dealing with information overload: optimize the use of signs.
Processing information means extracting it, transforming it, storing it, and indexing it for later retrieval.
Indexing means referring to an information set via a key. This key is a sign for the original information.
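
As a toy sketch of that idea (my own illustration, with made-up keys), an index is just a mapping from a small key to the larger information set it stands for:

```python
# Toy illustration: an index maps a small key (a sign) to a larger
# information set, so later retrieval only needs the key.
index = {}

def store(key, information):
    index[key] = information      # the key now stands in for the information

def retrieve(key):
    return index[key]

store("office-router", {"location": "home office", "ports in use": 2})
print(retrieve("office-router"))
```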

Example:
I have two LAN cables connected to my home office's router on one end and to two computers, A and B, in another room on the other. I want to know which cable is connected to which computer. After conducting an experiment to determine the connectivity, I have a choice of how to store this new data in my information system called the house. For example:

  • I can write it on a piece of paper. This is the worst place to put it, because I will surely misplace that piece of paper and never find it when I need to answer this question, e.g. to choose which LAN cable to use for another task without disrupting computer A's connectivity.
  • I can write it in my notebook. Except I don't keep one.
  • I can label the cables. That sounds much more practical. Which cables to label, and where?

To make absolutely sure, I can label all 4 ends. However, I really don't need to label the ends at the computers, since it's obvious which is computer A and which is B. So I can label the 2 ends on the router side. That works. But can I do better (i.e., less)? I can label only one cable, the one connected to A, with "A". "A" is a sign for A. Using the rules of this very simple binary system, not-"A" is "B".

By strategically choosing the location of the index (to help with retrieval), I was able to bring the number of signs down to 1 instead of the 4 candidates. That makes 3 fewer tags to worry about. Because signs are themselves new pieces of information, this means less information to process.
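A minimal sketch of that single-label scheme (the names are my own): only the cable going to computer A carries a tag, and the other cable is identified by elimination.

```python
# Single-label scheme: tag only the cable to computer A; infer the rest.
def computer_for(cable_is_labelled_a: bool) -> str:
    return "A" if cable_is_labelled_a else "B"

print(computer_for(True))     # the labelled cable   -> computer A
print(computer_for(False))    # the unlabelled cable -> computer B
```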

NLTK experiments

WordNet is a semantic database of English words. It includes word relationships such as synonym, antonym, hypernym (generalization), and hyponym (specialization).
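
A quick sketch of poking at these relationships through NLTK's WordNet interface (the example words are arbitrary):

```python
import nltk
nltk.download('wordnet')              # one-time corpus download
from nltk.corpus import wordnet as wn

dog = wn.synset('dog.n.01')
print(dog.definition())
print([l.name() for l in dog.lemmas()])          # synonyms in the same synset
print(dog.hypernyms())                           # generalizations, e.g. canine.n.02
print(dog.hyponyms())                            # specializations, e.g. puppy.n.01
print(wn.lemma('good.a.01.good').antonyms())     # antonyms live on lemmas
```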

It's fascinating to see the tree of life built into WordNet. Can other taxonomies be imported into WordNet and applied to words or phrases?
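
For example, the hypernym chain of any noun synset climbs up to the root of the hierarchy ('entity.n.01'), which is where that tree-of-life feeling comes from:

```python
from nltk.corpus import wordnet as wn

# Each path runs from the root ('entity.n.01') down to the synset itself.
for path in wn.synset('dog.n.01').hypernym_paths():
    print(' -> '.join(s.name() for s in path))
```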

Of course, WordNet should really be loaded into Neo4j for generic graph programming. It's been done.
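
In case I try it myself, a rough sketch of what that load could look like with the official neo4j Python driver (connection details and credentials are placeholders, and only noun hypernym edges are exported):

```python
from neo4j import GraphDatabase
from nltk.corpus import wordnet as wn

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    for syn in wn.all_synsets('n'):                    # noun synsets only
        for hyper in syn.hypernyms():
            session.run(
                "MERGE (s:Synset {name: $child}) "
                "MERGE (h:Synset {name: $parent}) "
                "MERGE (s)-[:HYPERNYM]->(h)",
                child=syn.name(), parent=hyper.name(),
            )

driver.close()
```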