The Answers To Your Generative AI & Knowledge Graph Questions

Posted by Sean Martin on Aug 7, 2023 1:24:13 PM

We recently shared with the world how generative AI, paired with knowledge graph technology (in our example via Knowledge Guru), opens up a world of intuitive, self-service analytics that can be limited to precisely the data an enterprise wishes to make available to a given question submitter. The recent live demo was followed by a live Q&A session that hosted many great questions. In this blog post, we answer all of your user-submitted questions.

Question 1

Will Knowledge Guru be available as a standalone application API or will it be an extension of Anzo only?

I think it will be an application and an API. Initially, we are only building it to support Anzo, but that’s because we have to get that out and into the market first. Beyond that, we will see what is in store for the long term. For the short term, it’s going to be an application and an API, so that customers who have their own user interfaces can embed this capability into their UIs. Knowledge Guru will manage the interactions with the LLMs, as well as manage the context.

Question 2

Does it take SKOS ontologies or just OWL and RDF/RDFS?

SKOS is, in a way, metadata describing metadata. We honestly haven't experimented much with it, because what we want is a very direct, unambiguous description of the instance data. Metadata describing your metadata is a step removed, and I am not sure how well that will work. It may work in some cases, but it's going to be more for the system to have to figure out. You already have to corral this thing into behaving itself, so why make it any more difficult than it already is?
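To make the distinction concrete, here is a minimal sketch with made-up names (not from the demo), contrasting a direct OWL/RDFS description of a predicate with a SKOS description that talks about the vocabulary itself, one step removed:

```python
# Illustrative only: made-up names, not from the Knowledge Guru demo.
from rdflib import Graph

# Direct OWL/RDFS description: the predicate itself carries the meaning
# the LLM needs in order to write queries against instance data.
direct_ttl = """
@prefix ex: <http://example.com/onto#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:hasPart a owl:ObjectProperty ;
    rdfs:label "has part" ;
    rdfs:domain ex:Assembly ;
    rdfs:range ex:Component .
"""

# SKOS description: a concept scheme about the vocabulary itself,
# one step removed from the instance data.
skos_ttl = """
@prefix ex: <http://example.com/onto#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
ex:PartConcept a skos:Concept ;
    skos:prefLabel "part" ;
    skos:broader ex:ComponentConcept .
"""

for name, ttl in [("direct", direct_ttl), ("skos", skos_ttl)]:
    g = Graph()
    g.parse(data=ttl, format="turtle")
    print(name, len(g), "triples")
```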

Question 3

Given that you use OpenAI’s API, is data actually sent to their servers?

Yes, we use the OpenAI endpoints. We initially started with OpenAI's endpoints, and when Azure announced theirs, we adapted the system so that you can switch between the two. They're essentially the same API, although there are little differences in the way you call the APIs.
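As a rough sketch of what that switching looks like with the official openai Python SDK (v1+); the model name, environment variables, and API version here are placeholders, not Knowledge Guru's actual configuration:

```python
# A minimal sketch of switching between the OpenAI and Azure OpenAI
# endpoints. Deployment/model names are placeholders.
import os
from openai import OpenAI, AzureOpenAI

def make_client(provider: str):
    if provider == "azure":
        # Azure addresses a deployment you created, under an API version.
        return AzureOpenAI(
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_version="2024-02-01",
        )
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])

client = make_client("openai")
# The call shape is the same for both providers; for Azure, `model` is the
# name of your deployment rather than an OpenAI model id.
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```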

As for what is sent to the LLM: it's a read-only service on the far side. You push your submission in and get back the generated text; it's one shot and completely stateless. What we are sending, if it were being recorded, is a compressed ontology and a complicated prompt, which gives the LLM all the instructions it needs to deliver the Knowledge Guru experience. What comes back is the answer, the query, and a certain amount of binding instructions, e.g. which cards to display it on. It's all over SSL, but you do need to trust whoever is hosting your LLM not to be recording the interactions.

Many of our customers already have a very long and deep relationship with Microsoft and are very happy to continue to consume compute from Microsoft, in the same way they do for SharePoint or any of the Azure services. However, you definitely are leaking information if someone is recording what’s going to and from the LLM. If that’s very important, then you need to start thinking about a completely private version of the LLM.

Question 4

Can we use/link external online ontologies with Knowledge Guru?

The short answer is that you can't link them, but you can import them into Anzo. Anzo's knowledge graph system uses open standards: SPARQL, OWL, and RDF, and we're adding SHACL in the next release. You can import any ontology you like.

As long as the ontology represents the domain and the data accurately, you should get decent results.

Question 5

How are your chat queries converted to queries into your knowledge graph?

That's the magic of the LLM. It has clearly been trained on a lot of SPARQL. If you go to the public ChatGPT and ask it to write a query, it will happily write you a SPARQL query. The trick is to get that SPARQL query to use only the predicates describing your knowledge graph; we spent some months mastering that. The LLM transforms a human-readable question into a SPARQL query. Beyond that, you are going to need to speak to somebody who understands the internals of the LLM as to how the actual transformation happens.
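To make the general technique concrete, here is a hedged sketch of the pattern, not the proprietary Knowledge Guru prompt; the ontology summary, prefixes, and question are invented:

```python
# A sketch of constraining SPARQL generation: give the model a compact
# ontology summary and insist the query use only the listed terms.
from openai import OpenAI

client = OpenAI()

ONTOLOGY_SUMMARY = """\
Classes: ex:Station, ex:Line
Predicates: ex:onLine (Station -> Line), ex:stationName (Station -> string)
Prefix ex: <http://example.com/tubes#>
"""

SYSTEM_PROMPT = (
    "You translate questions into SPARQL. You may use ONLY the classes and "
    "predicates listed below. Do not guess any other terms. Return just the "
    "query.\n\n" + ONTOLOGY_SUMMARY
)

def question_to_sparql(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # keep the translation as deterministic as possible
    )
    return resp.choices[0].message.content

print(question_to_sparql("Which stations are on the Victoria line?"))
```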

Question 6

Would the guru still work if the predicate were "hasPart" vs. "has part", which is closer to natural language?

Knowledge Guru should behave fine. The technology really is very, very squishy, and that's what's so amazing about it! Think about our traditional systems: they are rigidly defined by their interfaces, can be brittle, really only work in one way, and we test the heck out of them to make sure they work. This LLM technology is almost the opposite; it's incredibly flexible. That's great when you want it to be very liberal in how it interprets the human's question, which doesn't have to be stilted in any way. You can ask a question very naturally and it will do its best to map it down onto the ontology, and it does this really well. But sometimes it does too well, or does things that are unexpected, or does more than you want, and that can upset the program on the client side, which really just wanted the answer in a particular form. If the answer isn't in that form, you often have to nudge the LLM and say, "Hey, please answer again," while still remembering the original prompt instructions. It's very liberal in its translation, and that's actually phenomenal.
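Here is a minimal sketch of that nudge pattern, assuming a client that wants JSON back; it is illustrative, not Knowledge Guru's actual retry logic:

```python
# If the model's reply isn't in the expected form (JSON here), ask again in
# the same conversation so the original instructions still apply.
import json
from openai import OpenAI

client = OpenAI()

def ask_for_json(messages, max_retries=3):
    for _ in range(max_retries):
        resp = client.chat.completions.create(model="gpt-4", messages=messages)
        reply = resp.choices[0].message.content
        try:
            return json.loads(reply)  # accept only well-formed JSON
        except json.JSONDecodeError:
            # Keep the bad answer in context, then nudge for the right form.
            messages.append({"role": "assistant", "content": reply})
            messages.append({
                "role": "user",
                "content": "Please answer again, as valid JSON only, "
                           "following the original instructions.",
            })
    raise ValueError("Model never produced valid JSON")

# Usage: ask_for_json([{"role": "system", "content": "Reply in JSON."},
#                      {"role": "user", "content": "List two tube lines."}])
```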

By the way, you can ask the question in pretty much any language or character set, and it will answer in the language the question was asked in. That is quite amazing. The SPARQL query itself always looks like it's in English, though; I believe I instructed it in the prompt to write the query's comments in English, for when you are looking at the query.

Question 7

Do you leverage vector queries?

We don't leverage vector embeddings at all in the current solution, although we certainly will in the future, because we want to use the vector approach to bring in material from textual sources. We are currently creating SPARQL queries and executing them; that's what you saw in the Knowledge Guru demo. However, I think the answers are going to be more interesting if the context on which the answer is generated is not only the result of a query, but also the results of a vector search or cosine search on the unstructured data. Anzo is a system that lets you bring in unstructured data and extract data from it, so it's going to be quite a small change for us to add the embedding process. The answers will be better, because you will be combining real-time structured data with unstructured data.
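As a sketch of the cosine-search idea, here is the generic technique, not Anzo's implementation; the embedding model, chunks, and question are all placeholders:

```python
# Embed text chunks from unstructured sources, then rank them against the
# user's question by cosine similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

chunks = [
    "The Victoria line opened in 1968.",
    "Maintenance reports mention recurring signal faults at Oxford Circus.",
]
chunk_vecs = embed(chunks)
question_vec = embed(["Which line has signal problems?"])[0]

# Cosine similarity: higher means the chunk is more relevant.
scores = chunk_vecs @ question_vec / (
    np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(question_vec)
)
best = chunks[int(np.argmax(scores))]
print(best)  # this chunk would be added to the LLM's answer context
```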

Question 8

Are you feeding the Ontology to OpenAI?

Yes, we are: a highly compressed form of the ontology is sent in the prompt. OpenAI gets the model data and the question you are asking.
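For intuition, here is one plausible way to compress an ontology into a terse, prompt-friendly listing; Knowledge Guru's actual compression scheme is not public, so treat this as an assumption-laden illustration:

```python
# Reduce an ontology to a terse listing of object properties with their
# domains and ranges. Illustrative, not Knowledge Guru's actual scheme.
from rdflib import Graph, RDF, RDFS, OWL

def compress_ontology(path: str) -> str:
    g = Graph()
    g.parse(path)
    lines = []
    for prop in g.subjects(RDF.type, OWL.ObjectProperty):
        domain = g.value(prop, RDFS.domain)
        rng = g.value(prop, RDFS.range)
        lines.append(f"{g.qname(prop)}: "
                     f"{g.qname(domain) if domain else '?'} -> "
                     f"{g.qname(rng) if rng else '?'}")
    return "\n".join(sorted(lines))

# e.g. print(compress_ontology("tubes.ttl"))  # hypothetical file
```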

Question 9

Is this by default an on-premise installation? Can you give a rough direction of what a license would cost?

It isn't on-premise by default. The client side can be on-prem; it could be anywhere. The LLMs we are currently using are in the cloud, and that may change. There's a lot of work going on in open-source LLMs, with some commercially friendly ones coming along too, and we are going to keep an eye on that space. For the moment, you can run the client side, or Anzo, wherever you like: cloud, on-prem, or even on a physical machine. The LLM side, however, is an API over the internet.

We are not disclosing pricing publicly yet.

Question 10

Will Knowledge Guru learn from interactions? 

Not yet. The main reason is that these are still early days; none of the OpenAI chat models allow fine-tuning yet. However, I can see a lot of advantages. We will almost certainly enable Anzo to capture feedback, which would allow the system to get better and better through end-user feedback. So I think that's definitely going to happen, but not yet.

Question 11

How do you keep ChatGPT's training data in sync with Anzo's ontology, which is feeding off growing DBs and new data sources?

We don't have a problem with this. The prompt actually tells the LLM that is generating the query: "You may not guess or assume anything else about the database other than what I am telling you in this prompt." The LLM has to create queries without making any assumptions about data in the system; it can only use what it can see. It sees the information about the ontology, and we also allow it to use any results that it's given, since those are trusted sources. No other assumptions can be made about data in the system.

If you use a different ontology, you will see a different Knowledge Guru operating in the same way as it worked against the tubes example, but in the domain of that other ontology. We don't really care about the training or anything else going on inside the LLM, apart from the fact that it generates decent SPARQL and responds in a moderately predictable way. We don't tend to use the LLM's open knowledge at all. We constrain it so that the queries it creates are always accurate and executable.
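A guardrail consistent with that constraint might check that a generated query mentions only URIs from the ontology before executing it. This is an illustrative sketch, not the production safeguard; the URIs and query are invented:

```python
# Before executing a generated query, verify that every full URI it
# mentions is actually in the ontology.
import re

def uses_only_known_terms(sparql: str, known_uris: set[str]) -> bool:
    mentioned = set(re.findall(r"<([^>]+)>", sparql))  # full URIs in <...>
    return mentioned <= known_uris

known = {"http://example.com/tubes#onLine",
         "http://example.com/tubes#stationName"}
query = 'SELECT ?s WHERE { ?s <http://example.com/tubes#onLine> ?l }'
assert uses_only_known_terms(query, known)
```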

Question 12

Can a user application access both ChatGPT's and Knowledge Guru's APIs in a single solution?  Is there a good example of why I'd need to do this?

Yeah, I am sure you could, although we haven't exposed a Knowledge Guru API yet. But when we do, why not? If you are writing an application that talks to two different APIs, you would be able to build something that does both. That's not what we are trying to do, though. We are creating something that's very safe for our enterprise customers on their internal data. We don't want to rely on the open training data of any of these GPT models, because I don't trust them yet. It's going to be a while. Until they can predictably say who founded Cambridge Semantics with 100% accuracy, I am not going to be trusting them.

Question 13

That complicated prompt you are sending to the LLM is probably secret? That would be fascinating to see indeed.

It is for the moment, because we have spent so much time on it, but we may change that. The reality is that the whole thing is moving very quickly. The model we are going to move to with Knowledge Guru is one where customers will be able to extend the prompts or add their own. Individual cards will have their own prompts, different parts of cards will have their own prompts, and customers will be able to create their own cards and add the prompts for those. So I think it's going to be a mixture: some of the stuff is proprietary, and a lot of it isn't. But anyone who wants to spend a few months on this can figure out what we figured out.

Question 14

Does Knowledge Guru do anything to be more efficient with GPT license tokens? 

Yeah, it does a lot. This is an interesting point: it brings in the cost, how widely you can deploy this, and so on. The costs keep dropping. We have to be looking at the GPT-3.5 Turbo stuff again now that they have added to the completions API; one of the big reasons we moved to GPT-4 was that Turbo was not very good at answering in JSON. And the cost of GPT-3.5 Turbo just dropped 25%.

We are doing an awful lot with context management: figuring out what to send back to the LLM and what not to send back. There's a lot to it, and we use quite a lot of the prompt.
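Here is a minimal sketch of one common context-management tactic, counting tokens with tiktoken and dropping the oldest turns until the history fits a budget; this is illustrative, not Knowledge Guru's method, and the budget is an arbitrary placeholder:

```python
# Trim conversation history to a token budget, always keeping the
# system prompt at position 0.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

def num_tokens(messages):
    return sum(len(enc.encode(m["content"])) for m in messages)

def trim_history(messages, budget=6000):
    # messages[0] is the system prompt and is always kept.
    while num_tokens(messages) > budget and len(messages) > 2:
        del messages[1]  # drop the oldest user/assistant turn
    return messages
```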

Question 15

Have you tested Knowledge Guru with SHACL?

Not yet, but we will. I am sure you will be able to do something with SHACL definitions; it's just a question of finding the right way to express them, either in the prompt or through future fine-tuning. Knowledge Guru is very loose and permissive, so I am sure SHACL descriptions, or some format derived from them, will work quite well.

Question 16

Can one limit the vocabulary I want to train the LLM with to the domain of a knowledge graph and get more explainable answers from the LLM? I.e. Can I use the LLM in a regulated application under GxP in such a configuration?

Yes, you can, though you are not really training it; it's one-shot prompting. That may change, and hopefully will change soon. You are able to give GPT instructions about things it must not do and things it has to do. GPT-3.5 Turbo just increased its prompt size to 16k tokens, so that's a lot more space; it's double the space we are currently operating with. So I think you are going to be able to limit the vocabulary. But again, you are not really training the LLM, not until you can do fine-tuning. Technically, training is where you actually build your own model, but I am not sure too many people are going to be up for that. They are more likely to fine-tune an existing LLM, do one-shot prompting, or, most likely, a combination of the two.

Question 17

How can I get started with Knowledge Guru?

The best way to get started is to reach out to our team here. We can provide an overview of the platform and discuss your use case. We are very happy to have discussions. If anyone wants to dive into more technical details, our team would be very happy to talk to you.

Generative AI and Knowledge Graph

Tags: Data Integration, Big Data, Ontology, Anzo, Graph, Analytics, Machine Learning, Artificial Intelligence, Graph Database, Knowledge Graph, semantic graph, data architecture, Large Language Model, LLM, Generative AI, Knowledge Guru
