poetry as data compression

A poem is a metaphor, and it strikes emotion because we call on our own knowledge to understand it. We know what the words mean, and so we glean from a small group of words a meaning that far surpasses the number of words used. In this sense, poetry is the most efficient form of data compression.

In other words, if a computer had read Shakespeare, it could express the meaning behind a letter or an essay with just a few bytes: locations in the collective knowledge corresponding to passages with a universally accepted meaning. This would be the ultimate data compression algorithm, similar to a quote or a reference, where the reader is compelled to consult another work. The more the computer had stored in its database, the more it could compress, and the different levels of knowledge each computer had reached would yield different interpretations of the original compressed text.

This would eventually create subjective, existential data: each computer would have a slightly different interpretation, and each time the theoretical text file was compressed and expanded, it would be tailored to the singular knowledge of that machine. Because of the compression, it would become impossible for one computer to know exactly what another had in its data file unless the two machines shared databases, and thus knew each other and could interpret each other's compression algorithms exactly. It would still be possible, however, to compare several machines' versions of the same file and so recover a fairly accurate version of the original, along with the compression algorithms of the individual machines. The scheme could also be extended to picture and application data, where one program behaves like others, or one picture resembles another.
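The idea above can be sketched in a few lines of code. This is only an illustration, not a real codec: the corpus entries, reference names, and sample sentence are all invented for the example. Each machine carries its own corpus of known passages; compression replaces a known passage with a short reference, and expansion depends entirely on what the receiving machine has read.

```python
def compress(text, corpus):
    """Replace any passage this machine already knows with a short
    reference tag; unknown text is kept verbatim."""
    for ref, passage in corpus.items():
        text = text.replace(passage, f"[{ref}]")
    return text

def expand(blob, corpus):
    """Expand references against this machine's own corpus; a machine
    with a different corpus produces a different (or no) reading."""
    for ref, passage in corpus.items():
        blob = blob.replace(f"[{ref}]", passage)
    return blob

# Two machines with unequal knowledge (hypothetical corpora).
machine_a = {"sonnet18.1": "Shall I compare thee to a summer's day?"}
machine_b = {}  # has never read Shakespeare

letter = "He opened with: Shall I compare thee to a summer's day?"
packed = compress(letter, machine_a)
print(packed)                               # "He opened with: [sonnet18.1]"
print(expand(packed, machine_a) == letter)  # True: shared corpus, exact recovery
print(expand(packed, machine_b) == packed)  # True: the reference stays opaque to B
```

Machine A recovers the letter exactly, while machine B is left holding an opaque reference, which is the essay's point about two machines needing shared databases to know each other's files.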
To make data expansion more consistent, however, all the computers could be compelled to accept identical meanings for certain passages in the reference database, and not to deviate or accumulate degraded versions of the text file. Their compression algorithms would then be the same, eliminating deviant versions of the theoretical text file that had been compressed and expanded so many times that its meaning was completely unlike the original. The text file would still have to be released in its original form periodically, so that machines could reinterpret it more and more accurately. This way, new versions of the old text file would not be created, and the original idea would remain singular and unchanged. For each differently sized database, interpretation, and compression algorithm, data compression would approach an individual maximum efficiency limit.
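The canonical-database fix can be sketched the same way. Here the reference table is a single frozen, versioned corpus that every machine is assumed to share (the entries and version tag are invented for the example); because everyone expands against the identical canon, any number of compress/expand cycles leaves the text byte-for-byte unchanged, and no drifting variants can appear.

```python
# A pinned, versioned reference corpus shared by all machines
# (contents and version name are illustrative only).
CANON_V1 = {
    "sonnet18.1": "Shall I compare thee to a summer's day?",
    "hamlet3.1": "To be, or not to be, that is the question:",
}

def compress(text, corpus=CANON_V1):
    for ref, passage in corpus.items():
        text = text.replace(passage, f"[{ref}]")
    return text

def expand(blob, corpus=CANON_V1):
    for ref, passage in corpus.items():
        blob = blob.replace(f"[{ref}]", passage)
    return blob

essay = "She quoted Hamlet: To be, or not to be, that is the question:"
blob = essay
# Repeated round trips cannot degrade the file when every machine
# uses the same canonical table.
for _ in range(5):
    blob = expand(compress(blob))
print(blob == essay)  # True: the original idea remains singular and unchanged
```

This mirrors the periodic release of the original: as long as the canonical table itself is redistributed intact, every machine's expansion converges on the same text rather than on a private interpretation.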
