“A lot of people think an algorithm has to be done in code, it has to be something that is found in a computer but it can also be a process that you’re doing, maybe even semi-manually, that’s still an algorithm.”
– Kais Dukes
About Kais Dukes
Kais is a leading AI scientist and the CTO and Chief Scientist of Hunna, which combines AI and medical experts for large-scale preventive screening. Kais and his co-founders are currently in the news for having appointed an AI as CEO of the company, which we will discuss in depth in this episode. He holds a PhD in AI, has a strong background in financial tech leadership, and is also well known for applying AI to the Quran, developing the Quranic Arabic Corpus.
What you will learn
- Practical application of the “Hive Mind” concept (04:11)
- Leveraging AI CEO’s strengths while compensating for its limitations (06:58)
- Using algorithms to let executives focus on their strengths and passions (08:39)
- Overview of the multifaceted role of the AI CEO in decision-making (09:48)
- Illustration of the algorithm’s scoring functions with a practical example (12:54)
- Introduction of several simple decision-making systems (14:55)
- Unveiling the benefits of AI-Human partnerships in decision-making (17:11)
- Summarizing the intricacies of the “hive mind” decision-making approach (18:36)
- Harnessing collective intelligence with quantitative and qualitative models (22:13)
- Comparing the evaluation of an AI CEO to a human CEO from a business standpoint (25:10)
- Addressing concerns such as hallucinations and biases in AI systems (29:44)
Ross Dawson: Kais, it’s awesome to have you on the show.
Kais Dukes: Hey Ross! It is a true pleasure to be here. Thank you so much for inviting me here.
Ross: There’s been a lot of discussion about the AI CEO that you have created and deployed. We’d love to learn more about it. Can you give me a bit of the backstory of how it is that you came to work on this project and develop it?
Kais: It’s in the news right now. If you Google “First AI CEO Europe”, we do indeed come up. It’s been quite interesting. We’ve had a large number of interesting responses. I actually thought something quite different was going to happen, that there was going to be a lot of skepticism, people saying, really? You guys got an AI CEO? But the way we did the press release, we actually also announced it in combination with a science research paper that we published on arXiv. We’ve been very transparent about the algorithm and the process.
We’ve also been quite honest about what we’re doing. When we say an AI CEO, a lot of people who watch Hollywood might be imagining a robot that’s sitting in a boardroom, telling a bunch of executives what to do. But it’s actually not like that at all. It’s an algorithm, it’s a process. We can go into more detail about how it works in a minute, but briefly, it’s an algorithm and a process that involves humans as well as machine learning systems to come to a joint decision. We have a nickname for this, we call it a “Hive Mind”. But from a scientific perspective, this is more coming from collective intelligence.
The idea is, can we get a group of smart people together, a group of smart AIs, and a group of machine learning systems together? Can they come together to make a joint decision? Just to let you know where we got the inspiration for this, I’ve always been a big fan of Steve Jobs. Though I don’t agree with everything Steve Jobs has said, there’s one specific quote of his which has always stuck in my mind. He said, ‘We don’t hire smart people and tell them what to do, we hire smart people so they can tell us what to do.’ That always really stuck in my mind.
Because right now, we’ve got a mindset, which is that AI is a tool, it’s going to be humans calling the shots, and we’re going to use AI to help us out. I think that’s great. I think that’s always going to be there. But as AI systems get smarter, could the tables be turned? Could it also be that maybe, we’re also listening to what they have to say? Using that quote as inspiration, we really ran with that and we thought, can we try to implement this?
Ross: AI CEO. A CEO does quite a lot of different things. They talk to the media and investors, make decisions, and inspire people. Is there any subset? Is this specifically around decisions? Or are there other aspects of this?
Kais: That’s a really good question. This is not a robot. This is a software system, an algorithm. If you’re going to put an AI in that executive position, let’s get real, it’s probably not going to be exactly what a real traditional human CEO does. We’re going to have to bend the term a little bit. I can tell you what the system can and can’t do.
Right now, today, the AI CEO is not going to stand up on a company call, a Zoom call, and give a motivational speech to all of the staff, it’s not going to do that. But what it will do is the system will produce very good, very solid strategic advice. Imagine you’ve got a CEO who’s working from home, and she says, ‘Oh, I can only respond via email for the next couple of weeks, my webcam’s not working, I’ve got a sore throat, I can’t get on the phone.’ Now, is that really going to be a very effective CEO? For a lot of things, it can be. But in reality, what this has meant is that the other executives in the company, I’m the CTO, we have the COO, we’ve had to step in and do things that a traditional CEO might have to be doing – more direct communication with our stakeholders, with the staff we work with, with our partners, and more focus on setting the company culture. If you’re going to have a software system in an executive function, you’re going to have to do things in a bit of a different way. But it can still work.
Ross: This is a real need as in you have a company that is doing well and growing, you have a CTO and COO, but you don’t have a CEO so you have chosen to make that an algorithm.
Kais: We’ve chosen to make it an algorithm. The takeaway message is the other executives in the company have to step in and maybe do things that a traditional CEO would be doing. But it also means that we can then also focus on the things that we’re good at as well. For example, the COO loves sales and talking to people, and he loves the human aspect of things, and having more of the strategic guidance coming from an algorithm means he’s freed up to focus and do the stuff that he really enjoys doing and loves it. We definitely had a real need for it. For it to work, you have to reassess a little bit about what these roles mean and bend the traditional definition of CEO.
Ross: In terms of what the AI CEO is doing, what are the specific functions? Can you give me examples of decisions made or things that have been put on its plate and it’s been able to respond usefully?
Kais: Fundamentally, it’s an algorithm. The way the algorithm works is you give it a goal and it produces the best possible plan it can to help realize and achieve that goal. You could give it a high-level goal. For example, a high-level goal could be we need a strategic plan for Q3, given our resources, to help us achieve our objectives. Or you could give it a bit of a lower-level goal. For example, we recently had the AI CEO announcement, we put it up to the AI CEO, how do we handle this? No joke, one of the things that the AI CEO said is to get on some podcasts. I think that stuff works very well.
If you need some strategic advice, you can put those questions to it, and you get responses. We’ve actually been really amazed. Before the advent of modern AI, if we wanted a marketing strategy, being a small company, we might contact a marketing agency, and we might try to work with them. I’ve done that before, it can take weeks, and what you get out can be a bit hit and miss some of the time. But we’re just amazed. Now, in just a very short amount of time, we can get a marketing strategy, which actually feels pretty solid.
For things like that, it works very well. Where it doesn’t work well is where a bit more emotional understanding is required. For example, we’ve tried to get a bit more strategic advice from the AI CEO on how to handle things with sales. The advice hasn’t been great at dealing with the human factor. For us, our approach is, we don’t want to make this fully automated. We think that having humans in the loop is really important. We carefully review every decision. We don’t just follow this thing blindly. It’s a system. So far, we’ve been following it around 90% of the time. But keeping humans in the loop is very important.
Ross: One of the important things around this is where specifically you place the human in the loop. Is it in terms of you getting the output from the AI and then you approve or vet or refine what it is doing, or do you feed that back then for further things? Or how specifically are you architecting the human in the loop together with the AI?
Kais: That is an excellent question, Ross. We’ve outlined this in our science research paper. I know you’ve got quite a smart audience for this podcast, and I know normally the audience is pretty switched on so I’ll just go into some details for a couple of minutes.
I’ll try to keep it short, and not too technical. The algorithm we follow actually sounds really simplistic but don’t be fooled by a simple algorithm; sometimes a simple algorithm can actually be quite powerful. What we do is we have a goal that we’re trying to achieve and we’re trying to produce a plan. The first thing we do with the system is come up with a set of up to three scoring functions – three criteria. For example, recently, we were discussing how to handle the AI CEO announcement, we came up with three scores, which are a marketing score, an effort score, and an impact score.
The idea is the scores go between zero and 10. We also allow half points like 7.5. Very explainable and easy to do. The way the system works is it works in iterations with a feedback loop. The humans and the AI are collaborating, and they come up with a draft plan. We look at the draft plan, and we score it from zero to 10, on the different criteria. Then collectively, the humans and the AIs who are working on this together look at the plan, and we say, let’s now make a bunch of adjustments, see if we can move those scores a little bit, make some edits, revise the plan, and score it again. We do that through a few iterations until we feel we’ve actually got something quite solid. Now you might be looking at it and thinking, is that it? That just sounds so simplistic, right?
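[The iterative process Kais outlines could be sketched roughly as follows. This is a hypothetical illustration, not Hunna’s actual system: the criteria names, the starting scores, and the toy `revise` step are all invented for the example.]

```python
# Hypothetical sketch of the iterative scoring loop: score a draft plan
# on a few 0-10 criteria (half points allowed), revise it, and repeat.

def score_plan(plan, criteria):
    """Read a 0-10 score for each criterion. In the real process these
    scores come from humans and AI models; here they live in the plan."""
    return {c: plan["scores"][c] for c in criteria}

def iterate_plan(plan, criteria, revise, rounds=3):
    """Run a few feedback iterations: score, then revise, then re-score."""
    history = []
    for _ in range(rounds):
        scores = score_plan(plan, criteria)
        history.append(scores)
        plan = revise(plan, scores)  # stand-in for humans + AIs proposing edits
    return plan, history

criteria = ["marketing", "effort", "impact"]
plan = {"scores": {"marketing": 6.0, "effort": 4.5, "impact": 5.0}}

def revise(plan, scores):
    # Toy revision: nudge each score up by half a point, capped at 10.
    return {"scores": {c: min(10.0, s + 0.5) for c, s in scores.items()}}

final_plan, history = iterate_plan(plan, criteria, revise)
```

[In the real process the revision step is the interesting part – the humans and AIs debating edits – but the surrounding loop really is this simple.]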
Ross: No, that’s fantastic. I wasn’t aware there was an Archive paper which I’ll definitely have to have a look at. But to be frank, what you’ve just described, sounds potentially as innovative as the algorithm itself, in terms of being able to get effective humans in the loop and iteration cycle.
Kais: It sounds really simplistic but let me tell you that there are some very simplistic decision-making systems that are actually quite powerful. As I’m sure you’re aware, one of the famous ones is the Eisenhower Matrix, named after President Eisenhower. President Eisenhower, obviously a very smart man, was President of the United States. He used to get bombarded all day long with stuff he had to do. He said, look, I’ve got a simple system that works for me. I’m going to assess all this work in terms of just two factors – urgency and importance.
That gave him a bit of a quadrant, you’ve got four combinations. He would do what was important himself, personally, starting with what was urgent. If it was urgent but not important, he would delegate. If it was neither important nor urgent, he would put it in the trash bin. He also had his two scoring functions, an urgency score and an importance score; he was doing it qualitatively, not quantitatively, but the same principles apply. It sounds like a really simplistic system, but that simple-sounding system is actually how he ran his presidency and got stuff done.
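[The four quadrants can be made concrete in a few lines. A minimal sketch; the action wording is invented for illustration:]

```python
# Minimal sketch of the Eisenhower Matrix quadrants described above.

def eisenhower(urgent: bool, important: bool) -> str:
    """Map the urgency/importance combination to an action."""
    if important and urgent:
        return "do it yourself now"
    if important:
        return "do it yourself, after the urgent work"
    if urgent:
        return "delegate it"
    return "trash it"

print(eisenhower(urgent=True, important=False))  # delegate it
```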
Another really good example of a decision matrix system like that is in business, you’ve got the Iron triangle, which is if you’re doing a project, maybe you care about the cost, quality, and speed, you can’t have all three. Maybe you’ve got different options for how to do the project, you’re thinking about scoring these, or you’re thinking about the right solution. Another really good example in business is the impact-effort decision matrix, you draw a grid with impact and effort, and you try to put pins on different solutions where these are good.
With all our business experience, we realized we could assimilate all these different decision-making matrices into a universal framework: a framework that can take any kind of scoring criteria, based on the problem at hand, and that keeps using the loop, keeping things iterative. What we found is really counterintuitive. It sounds like such a simple idea, but actually, there are a lot of benefits. First of all, it’s collective intelligence; you can get AIs and humans working on this together. It’s cognitively simple to understand. Humans are good at scoring stuff out of 10, like movie ratings: I gave that movie a seven. There are all kinds of things people score, maybe I shouldn’t talk too much about that. For example, I work out sometimes, I look in the mirror, and I’m scoring myself around a three out of 10 right now, I need to do a bit better. We’re really good at scoring things, so intuitively it makes sense. It also really helps with explainability, because once you start getting AIs involved in this, you get a really clear picture of what’s going on. We’re really amazed. It’s a really simple process, but we think it’s really effective.
Ross: It’s really interesting. You just keep on describing it as an algorithm, is it a large language model?
Kais: It’s a good question. We look at it as a hive mind. We’re happy to mix in all sorts of AIs and all sorts of decision-making tools. Yes, we do use large language models. We’ve connected three large language models together. Specifically, we’re using OpenAI’s GPT-4, Google Bard, and Anthropic’s Claude 2. We’re using some human experts. We’re also using some very specific machine-learning models and statistical models that we’ve had quite a lot of success with recently: RNNs, and occasionally BiLSTMs (bidirectional Long Short-Term Memory networks), so whatever makes sense for the problem.
The idea is, each of these components is producing outputs, producing suggestions on how to improve the plan and we look at all this together and we include what we feel makes sense. The reason it’s an algorithm is we’ve got three scores, but then we need an overall score for the plan. What we do is we do something mathematically called a weighted sum, a very simple and standard approach for multi-parameter optimization. We just assign, for example, if we’ve got three parameters, three scores, we might say, the compliance score is actually quite important for this problem, we’re going to give that a high weight, apply a weighted sum, we get an overall score for the plan. It’s an algorithm because we’re following a set of steps. A lot of people think an algorithm has to be done in code, it has to be something that is found in a computer but it can also be a process that you’re doing, maybe even semi-manually, that’s still an algorithm.
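[The weighted sum Kais describes is a standard technique, and a minimal sketch looks like this. The criteria names and weight values here are invented for illustration; normalizing by the total weight keeps the overall score on the same 0–10 scale as the inputs:]

```python
# Sketch of the weighted-sum step: combine per-criterion scores into
# one overall score, giving more influence to higher-weighted criteria.

def overall_score(scores, weights):
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

scores = {"compliance": 9.0, "effort": 6.5, "impact": 7.0}
weights = {"compliance": 3.0, "effort": 1.0, "impact": 2.0}  # compliance weighted highest

print(round(overall_score(scores, weights), 2))  # 7.92
```

[Because compliance carries the largest weight, the overall 7.92 sits closer to the compliance score of 9 than a plain average (7.5) would.]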
Ross: Are you fine-tuning any of the large language models, GPT4, or others?
Kais: Only very rarely, when we’ve got a specific niche problem that we need to look at. But generally, we found if you take off-the-shelf models and actually combine them together, you get a really good result. Let me just give a very simple analogy for some of the viewers and listeners here. If you ask someone to estimate how many coins there are in a jar, imagine that you’ve got a jar full of coins, and you’re asking one person how many coins do you think there are in that jar. They might say, I don’t know, 300. But if you actually get 10 people to estimate how many coins there are in the jar, and then you aggregate those results, it’s really interesting, you can actually get a really accurate result.
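[The coin-jar analogy can be made concrete with a toy example. All the numbers below are invented; the point is only that the crowd’s average lands much closer to the truth than the typical individual guess:]

```python
# Toy version of the coin-jar analogy: the average of many rough guesses
# is often closer to the truth than a typical individual guess.

true_count = 500
guesses = [300, 620, 450, 700, 380, 540, 610, 420, 560, 480]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_count)
mean_individual_error = sum(abs(g - true_count) for g in guesses) / len(guesses)

print(crowd_estimate, crowd_error, mean_individual_error)  # 506.0 6.0 100.0
```

[Here the crowd is off by 6 coins while the average individual is off by 100, because the individual over- and under-estimates largely cancel out when aggregated.]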
Ross: That’s quantitative aggregation, though. If you’re getting qualitative, as in text-based, answers from the large language models, I’m just wondering how you’re able to combine those to create superior output.
Kais: That’s a really good question. We’ve got a little bit of secret sauce here, which we probably don’t want to disclose too much about. It’s a bit of one of our competitive advantages. We’ve actually found a way to get numerical answers out of the large language models. You make an excellent point because we’re dealing with qualitative problems but yet we’re using quantitative scores out of 10. We’ve got a bit of a secret sauce.
Ross: So it’s around framing the questions in ways that can be quantified, essentially.
Kais: Yes, it’s partly about that.
Ross: You also mentioned that you also use some machine learning models so this would be with internal data, I presume, in the company to be able to form?
Kais: One of the really good things like that is when we’re trying to forecast which market we should be looking at, we apply a lot of quantitative models. As a startup, we’re currently focused on the UAE, we didn’t pick the UAE by chance. We modeled it. We applied quite a lot of quantitative data and things like that. I think quantitative models can work quite well if you’re dealing with things like GDP, you’re looking at how much spending power they have in the country, or what are their investment levels. You’re looking at things like future projections on growth within the country, you have a lot of quantitative data. In things like that, quantitative modeling can work quite well. But yes, you’re absolutely right. The benefit and the power of what we’re doing is we’re not just restricting ourselves to one sort of model. It’s really about the collective intelligence aspect, mixing all together, mixing quantitative models with qualitative models as well.
Ross: We talked about the hive mind of collective intelligence. You’ve got the models. There’s you, the COO, are the other human participants in the system all internal to the company or do you go outside to get any other participants in that decision-making?
Kais: A very good question, Ross. As a health tech startup, we do quite a lot of work with the medical community. We have partnered with a couple of senior medical professionals, one of whom is working at the World Health Organization, who’s been assisting us. They’ve also been participating in the process. It wasn’t easy to get them on board with it because we had a certain framework that we were asking them to follow. But once they got involved, they were really fascinated by it. They thought wow, this is quite cool. We’re working together with AIs, and it’s a structured approach. We’re trying to produce a plan, we’re scoring the plan, and we’re working on it iteratively. Once we got into the swing of things, we found it was very effective. So no, definitely not just internal. Our view is a system like this really works best when you’re combining experts together, we feel. The experts could be the AI systems, and definitely bringing in human expertise as well is what makes it very effective.
Ross: What are the next steps? You have your AI CEO, and you’ve got a strong structure methodology there. How do you then either deploy it more broadly, improve the quality of it, take the next steps, where to from here?
Kais: There are two ways to look at that question. There’s the business side of it, and then there’s the science side of it. From the business side, we need to measure the performance of the system in the same way you would measure the performance of a human CEO. The question is, if you had a human CEO, and you were the board of directors, who typically sits above the CEO, the board might be thinking, what is the CEO doing? Let’s look at the results. Has the CEO actually delivered on our strategic objectives?
For example, as a health tech company, our main strategic objective right now is to secure a medical pilot to prove that our product is working and to get a large number of patients onto the pilot. We’ve got some strategic objectives. The next step is to make sure that the algorithm and the approach we’re following is delivering the results we want as a business. That’s how we would measure things from a business perspective. We feel there’s no difference in having an algorithm or a human in a CEO position with regard to measuring its effectiveness; you would just measure it traditionally. Is it delivering the results you want?
From a scientific perspective, we’re in a very fascinating era right now because the large language models have come out quite recently and have revolutionized things; who’s to say what’s coming up around the corner? There could be another big step, a big leap coming up at any minute. What we’re really keen to do is to improve the collective intelligence with the latest AIs that come out. Immediately, as soon as another strong AI system is out there, we’re going to be jumping all over it and we’re going to try to assimilate it. I don’t want to put people off, I’m sounding like the Borg from Star Trek right now, we will just assimilate all these other intelligences, but I think that’s really the plan. As soon as another state-of-the-art AI system is available, we would love to also incorporate that into the collective decision-making process.
Ross: Since you already have multiple intelligences, artificial and human involved in the system, then presumably, it’s not difficult for you to bring in another participant in that model.
Kais: Absolutely. We’re looking at the system as a Hive mind, a mixture of experts, and we would love to continue to bring in additional AIs into the decision-making process.
Ross: In terms of the overarching algorithm that brings all of these together, and the ways of measuring results so you can improve it, are there any types of multi-scenario testing or other ways in which you’re refining the overarching algorithm?
Kais: We’ve actually been running the system for about 12 months. For the last 12 months, we’ve been looking at the decisions coming out of the system. I would say about 90% of the time, we’ve been pretty happy with it. There are some issues, especially with large language models. As I said, the AI is a collective intelligence; it includes a bunch of statistical models, machine learning models, and it also includes large language models. We aggregate all this together. The statistical models and the traditional machine learning models have their limitations, but they generally produce pretty consistent output.
One of the big problems with the large language models is a phenomenon called hallucination, which we’ve had to keep a really close eye on. Occasionally, these large language models will speak with such confidence, as if they really know what they’re talking about, while they’re just making up completely fictional scenarios or completely fictional situations. That is what’s called hallucination. You have to keep a close eye on that. That’s why it’s really important to double-check everything that these systems are saying.
Ross: Do you have any specific structures for being able to essentially identify or ensure that you’re not incorporating hallucinated content or ideas or decisions?
Kais: For us, the key thing is just human supervision, verification, and validation of what the system is doing. We also want to be compliant with the law. You also need to think about things from a legal, ethical, and regulatory perspective, especially being a startup based in Europe. Europe probably has some of the strictest frameworks in the world for privacy and compliance, especially with GDPR and so on. One of the big things in European law around AI at the moment, which has already been around for a while, even before the EU AI Act, is that you can’t have a fully automated system in a position that might seriously impact human lives.
It’s a bit of a wide framework and is open to interpretation, but if we turned around and said, we’ve got a fully automated AI CEO, it’s calling all the shots, that would be very hard to clear from a regulatory perspective. I don’t think that’s actually allowed right now. But having a system that produces a strategy that humans review and then act on? That, we feel, is fully compliant with the law, and we think that is the key way to make sure that issues like hallucinations and biases are kept in check. You need to have that human supervision, we feel.
Ross: In which case, the accountability resides with the human reviewer.
Kais: A hundred percent. That is also what the law wants to happen as well. Because, right now, the law isn’t ready for AI as an executive position, an AI system is not considered today as a legal entity. AI can’t sign contracts, AI can’t hold a legal position. We look at the AI CEO as a functional role, it’s doing that job functionally but you’re absolutely right, Ross, from a legal and ethical perspective, the human executives in the company are ultimately legally responsible. That’s what the law wants. It’s also what we want because we do want humans in the loop here. We do not want a fully automated system. We don’t think that makes sense for a lot of reasons.
Ross: Fantastic. Thank you so much for your insights, Kais. It’s a fascinating experiment. We look forward to hearing more about how the company does under the guidance of your AI CEO and how you continue to progress the project. Is there anywhere people can go to find out more about your work on this?
Kais: You can go to our website hunna.app. If you scroll down, you’ll see a picture of the AI CEO, and if you click through there, there is indeed a science research paper where we explain the algorithm. We’ve been very transparent and very open about how it works at a high level. We would love to get people’s feedback on it. We’re always open to improvements in how we do things.
Ross: Fantastic. Thank you so much.
Kais: Thank you so much for your time today, Ross.