How Are Knowledge Graphs Built?

Viewed 1,179 times

Video description:

Get an exclusive look at how the world's largest knowledge bases are constructed, from our very own Marsal Gavaldà.

TRANSCRIPT:
Another key aspect of the Knowledge Graph is that it's being constructed automatically. If you look at earlier attempts at creating knowledge graphs, such as the Cyc knowledge base from Cycorp in the 1980s, they attempted to catalogue all human semantics and common-sense knowledge by hand. They soon hit a plateau in terms of coverage because it's very difficult to scale up if, for every concept, you have to sit down and try to decide, for example, whether a certain property of an object is intrinsic or extrinsic, or if you need to catalogue all the different meanings of the word "place." Actually, the limitation of these earlier, hand-constructed knowledge graphs, or semantic networks as they are also called, reminds me of the paradigm shift that occurred in machine translation.
Since the 1960s there have been attempts at creating programs that would translate one language into another, say a paragraph written in English to its equivalent in Spanish. Those early systems employed hand-crafted rules that would say, for example, "adjective noun becomes noun adjective," because in Romance languages you don't say "green book," you say "libro verde" in Spanish (and Italian!) or "llibre verd" in Catalan, "livre vert" in French, "livro verde" in Portuguese, etc., the point being that Romance languages place adjectives after the noun they modify, which is the opposite order from Germanic languages like English. But the problem is that these hand-crafted rules soon become unmanageable, overly complex, and they still fail to cover all cases. Such is the fluidity of human languages.
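
As an aside from the transcript: the kind of hand-crafted rule described above can be sketched in a few lines of Python. This is a minimal illustration, not the speaker's system; the toy lexicon and the translate_np helper are hypothetical.

ADJECTIVES = {"green", "red"}  # toy adjective list; real rule-based systems needed huge hand-built lexicons
LEXICON = {"green": "verde", "red": "rojo", "book": "libro", "car": "coche"}

def translate_np(english_phrase):
    # Rule: English "adjective noun" becomes Spanish "noun adjective".
    adj, noun = english_phrase.lower().split()
    if adj in ADJECTIVES:
        return LEXICON[noun] + " " + LEXICON[adj]
    return english_phrase

print(translate_np("green book"))  # -> libro verde

Every new construction (longer phrases, Spanish adjectives that precede the noun, idioms) needs yet another rule, which is exactly the unmanageable growth described above.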
So what was the big breakthrough in machine translation? The availability of big data, in the form of large collections of parallel texts in two or more languages, and a totally different approach to the task: rather than writing rules by hand, applying statistical models that learn how certain words and features in one language map onto another. These automatically trained systems improve as more examples are fed into them. They're not perfect, but they are very useful and can be easily adapted to new domains and new languages.
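
As an illustrative aside, not part of the talk: the statistical idea can be sketched by estimating, from a tiny parallel corpus, how likely each Spanish word is to translate a given English word. The corpus and variable names below are made up for the example.

from collections import defaultdict

parallel_corpus = [
    ("green book", "libro verde"),
    ("green car", "coche verde"),
    ("red book", "libro rojo"),
]

# Count how often each Spanish word co-occurs with each English word.
counts = defaultdict(lambda: defaultdict(float))
for en_sent, es_sent in parallel_corpus:
    for en_word in en_sent.split():
        for es_word in es_sent.split():
            counts[en_word][es_word] += 1.0

# Normalize the counts into p(spanish_word | english_word).
prob = {en: {es: c / sum(es_counts.values()) for es, c in es_counts.items()}
        for en, es_counts in counts.items()}

print(prob["green"])  # "verde" gets the highest probability (0.5)

Feeding in more sentence pairs sharpens these estimates, which is the sense in which such systems improve automatically as more examples are added; real statistical translation systems refine this co-occurrence idea with word-alignment and reordering models.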