This is the third blog post in the Knowledge Graph Best Practices series. We put together this content to provide you with all the information necessary to have a successful knowledge graph project.
If new skills and roles are required to adopt knowledge graph technology, the barrier to adoption might be too great. So a good implementation abstracts the technical details, but still allows those savvy in graph operations to interact with the “raw” knowledge graph.
With effective abstraction, tooling, workflows and APIs, your knowledge graph solution is accessible, manageable and sustainable by your existing users and developers. There is no need for new roles or skills.
When you assemble your team, focus on existing personnel and how they can support the knowledge graph paradigm. Depending on your implementation, people in roles such as Database Administrator may find that life gets easier: users and developers are more empowered, which reduces the burden on traditional Database Administrators.
Knowledge graphs built on RDF and OWL are resource-oriented, which means they lend themselves well to RESTful services architecture. In fact, the implementation should, or at least could, be a self-similar design, which means the software system that manages the knowledge graph is also built on RDF and OWL. This promotes high cohesion and consistency.
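To make the resource-oriented idea concrete, here is a minimal sketch of how a RESTful GET on a resource URI can map directly onto a triple store lookup. The URIs, predicates, and store contents below are hypothetical examples, not part of any particular product's API.

```python
# Illustrative sketch: resource-oriented access to a tiny in-memory triple store.
# All URIs and data here are made-up examples.

TRIPLES = [
    ("http://example.org/person/ada", "http://xmlns.com/foaf/0.1/name", "Ada Lovelace"),
    ("http://example.org/person/ada", "http://xmlns.com/foaf/0.1/knows",
     "http://example.org/person/charles"),
    ("http://example.org/person/charles", "http://xmlns.com/foaf/0.1/name", "Charles Babbage"),
]

def describe(resource_uri):
    """Return every (predicate, object) pair for one resource --
    conceptually what a RESTful GET on that resource's URI returns."""
    return [(p, o) for s, p, o in TRIPLES if s == resource_uri]

print(describe("http://example.org/person/ada"))
```

Because every subject in RDF is itself a URI, the mapping from "resource on the web" to "node in the graph" is one-to-one, which is what makes the self-similar design natural.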
Trust and Confidence in Knowledge Graph Data
A new function arises from semantically integrated data: verification and validation. When multi-source data is used to create a knowledge graph, users and automated clients access information that is combined from potentially many sources. A decision maker cannot or will not simply act on this information unless he or she has confidence. And that confidence is enabled through verification and validation. In other words, your implementation needs to provide provenance for every field or property of data.
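One common way to attach provenance to every statement is to extend each triple to a quad, where the fourth element names the source (in RDF terms, a named graph). The sketch below uses made-up subject, predicate, and source names purely for illustration.

```python
# Illustrative sketch: per-statement provenance via quads.
# Subjects, predicates, and source names are hypothetical.

QUADS = [
    # (subject, predicate, object, source)
    ("device:42", "hasSerial", "SN-1001", "source:erp_system"),
    ("device:42", "hasLocation", "Hangar 3", "source:asset_tracker"),
]

def provenance(subject, predicate):
    """Return the source(s) asserting a given property of a subject,
    so a decision maker can see where each value came from."""
    return [src for s, p, _, src in QUADS if s == subject and p == predicate]

print(provenance("device:42", "hasLocation"))
```

With this structure, a user querying `device:42` can always ask "who says so?" for each individual property, which is the verification and validation function described above.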
If you think about it, we can employ the current roles and skills to create and manage knowledge graphs; however, we are “inverting” the data access model from a “bottom up” process to a “top down” process. Yes, it’s true that we construct the knowledge graph from data sources, but that’s a necessity born from legacy approaches to data management. In the future, data will align directly to knowledge models and will inform decision makers and other consumers from an information view. The “data view” will support findings and conclusions. And don’t forget! The knowledge graph paradigm is ultimately for increasingly autonomous systems, which implies that humans have an important role in verification and validation. The “garbage in, garbage out” principle still applies.
What skills does your team bring to the table?
The next post in the Knowledge Graph Best Practices series will describe how best to prepare for setting up a knowledge graph project. Hint: data access. The Knowledge Graph Best Practices series kicked off with a post on developing and nurturing internal support for your project, and the second post discussed how best to select your first knowledge graph use case. Another great resource for information like this is the O’Reilly Knowledge Graph Ebook.
- Building Momentum | Educating others about knowledge graph and getting support for the project
- Selecting Your First Use Case | Set up for success
- Assembling the Team | Required roles and skills
- Preparation | What to do before you begin to avoid delays