
Our Articles

MVPs
Why can AI become a good choice for venture capitalists?
September 10, 2024
11 min read

In the Innovantage podcast, Sigli’s CBDO Max Golikov talks to tech experts and entrepreneurs about their vision of how artificial intelligence is transforming the world. The 4th episode covers much more than that. Leesa Soulodre, who was the podcast guest, explained not only the role of technology in modern society but also the role of society in tech progress.

Check out the full Innovantage episode with Leesa Soulodre here: https://www.youtube.com/watch?v=D5oANROV8X4&t=1373s

Leesa is the founder of R3i Group and the Managing General Partner at R3i Capital, a deep-tech cross-border venture capital firm that focuses on AI and sustainable development.

Deep-tech startups: Key pitfalls on their way

Leesa's firm helps projects connect to capital, customers, and non-dilutive financing that lets startup founders keep full ownership of their companies.

Today, founders who work in deep tech (in other words, who build projects based on high-tech innovation or significant scientific advances) traditionally face three main challenges.

A commercial challenge, or the commercial Valley of Death. That's the period when a startup has already begun operations but hasn't generated revenue yet. Founders need to get through this period to prove that their product does what it says on the tin and has real value.

A technical challenge. It's important to demonstrate that a product performs consistently every time, so that it is safe to use.

An ethical challenge. This challenge stems from the fact that almost nothing we use today has a kill switch. Therefore, a product needs to be inherently safe in its provision.

How to make sure that AI developments are safe

While talking about the safety of AI products, Leesa recalled some other well-known cases. The proliferation of Airbnb made everybody's homes available to guests. This created the need for trust and safety teams.
With the growing popularity of taxi services, anybody's car can be used as a taxi. This again highlighted the demand for such trust and safety teams.

What do we have to deal with in the case of AI? The situation may look rather alarming. In fact, deep neural network compression technology could be used to kill more people faster and with less energy than other technologies.

And if you consider that any AI product could be used for such a purpose, it becomes obvious that we need to maintain this notion of trust and safety teams. This is vital to make sure that our technologies are not misused for unintended purposes.

Any AI organization with as much influence as OpenAI that does not invest in trust and safety teams will face significant legal and regulatory hurdles. Moreover, such companies will have even more trouble growing and innovating in the future if they don't earn implicit trust from their user base.

Regulation in the AI space

When it comes to the regulation of tech companies, there have always been controversies. One of the main reasons is that regulation doesn't tend to catch up fast enough with what tech companies are doing.

Discussing this aspect with Max, Leesa mentioned that she sits on the board of the AI Asia-Pacific Institute and communicates with representatives of governments in the region. The governments want to build safety rails for technologies, and for AI in particular. But there is a significant barrier.

Let's take Singapore as an example. The vast majority of registered generative AI companies there are just starting their business journeys. They are at their seed or pre-series A stages. This means that quite often they do not even know what they have on the tin. They do not know the value of their products. That's why it's natural that they are not ready to invest in regulatory oversight.
Given this, there is no sense in asking them to do that.

Leesa believes it is more sensible to build guardrails into the fabric of the major technologies that underpin the new products and solutions created by startups.

For example, many GenAI companies are building their tools on the back of technologies developed by OpenAI, Microsoft, or Amazon. So it makes more sense to start with these tech giants. They need to comply with regulations first.

Is the use of popular LLMs the key to success?

Talking about mature AI technologies that startups can rely on, Leesa mentioned Hugging Face as an example. It is a versatile platform widely recognized for its open-source repository of large language models (LLMs).

Leesa's VC firm works a lot with startups. Around 2,000 projects claimed that they were using Hugging Face. On closer investigation, it turned out that only 200 were truly using it. And only 16 were fundable in the opinion of R3i's experts.

Today there are a lot of players with similar offers. Leesa noted that both open-source and commercial models, like ChatGPT, can be a good option for new technologies. But it's vital to understand that they serve different purposes. For example, ChatGPT is well suited to handling high volumes of repetitive automation tasks.

As an investor and technologist, Leesa is interested in finding technologies that will work as efficiently and safely as possible and bring the highest value.

She said that she doesn't invest in ChatGPT-like solutions. She looks for applied AI technologies around critical infrastructure and highly regulated industries. The projects that interest her most are those that can bring tangible results.
They can power transformation from point A to point B in such domains as smart cities, energy, healthcare, industrial manufacturing, water management, agriculture, mobility, space safety, security, and surveillance.

For instance, she mentioned that her VC firm often invests in technologies for renewable energy. It is already a highly regulated sector, despite being comparatively new.

Deep tech investing: When is it a good idea to support an AI project?

In the discussion with Max, Leesa mentioned that today there are a lot of AI-related projects that may look quite appealing to investors. But in reality, they may turn out to be a mousetrap.

Working with the deep tech industry, Leesa prefers to invest only in projects that have deep scientific research and technological invention behind them, which can be proved by a patent pool or a data moat. And if a project has a data moat, it should be its own, not one that Microsoft or OpenAI possesses.

But what are these things? And why do they matter?

Leesa explained this with real-life examples. When scientists at a university invent something, they need to protect it from being copied or misused. Almost always in such cases, they can apply for a patent that will protect the idea or innovation. If somebody else wants to use this innovation, they will need to obtain a license.

However, Leesa warns about one serious challenge related to patents. When you publish a patent, everyone can learn what it is about. Unfortunately, at the moment, patents, especially in the software industry, are not protected well enough.

Patents themselves can be viewed as assets. Even if a project fails or the development of the technology is frozen, founders will still have a patent that can later be sold.

As for data, it can also be monetized.
If your company is carefully collecting, identifying, classifying, and tagging data, you (or somebody else who gets access to it) can use it to create new products or power existing ones.

Generative AI for patents: Can we trust it?

Talking about the capabilities of generative AI, Leesa stated that it is fantastic for ideation, and especially brainstorming. Nevertheless, it's vital to understand that such models hallucinate: they can produce wrong or irrelevant answers. This may happen because the training data was incomplete or biased, which is a vivid demonstration of the "garbage in, garbage out" principle. Moreover, hallucinations may happen because AI models often lack constraints that would limit possible outcomes.

That's why we can draw the following conclusions. First of all, we should always carefully check whether the information we receive is true. And secondly, despite the advancements in GenAI, we still need human creativity, empathy, and ingenuity. That's what AI can't provide at the moment.

Innovation timing: The Cinderella effect

It's not a good idea to come to the ball too early, as nobody will have turned up yet. But you also shouldn't come too late, as you will miss all the fun. You should arrive just in time. That's why, before introducing a new technology, it is necessary to analyze whether the market is ready to receive it and to consider the key barriers to its adoption. The psychosocial aspect is important: people should trust you and your solution.

It's vital to listen to different opinions to detect possible unintended consequences that may go unnoticed by founders. When a team is working on a new product, they have only one perspective. But when you are building something new for a community, you should know what impact your innovation will have on it.
It's also worth mentioning that the impact on one community may differ from the impact on another.

It may sound surprising, but in many cases it is very sensible to listen to children as well. Today, there are even some tech events for kids, and that is a very good trend. One day they will become active users of technology. That's why their voices, their questions, and their doubts also have value.

Speaking about the technologies that Leesa has invested in, she mentioned a couple of examples. She described them as absolutely revolutionary from the perspective of activation and implementation.

Quantum Brilliance. The company works on room-temperature diamond-powered quantum computing. In other words, thanks to the use of synthetic diamonds, quantum accelerators will be able to work at room temperature. Though the history of this project is just beginning, it promises to bring quantum computing to a wide audience and make it an everyday technology. This approach could revolutionize every facet of a smart city, including security, drug discovery, material science, data operations, etc.

ViewMind. That's a brain health company. Its technology can look into your eye and capture millions of eye movements in a single session over 10 minutes. Based on this, it can determine with a very high degree of accuracy what level of degeneration you have, or are likely to have, in your brain. Such an examination can help manage diseases like Alzheimer's, dementia, multiple sclerosis, Parkinson's, or even post-traumatic stress in soldiers. This technology can show which area of the brain is affected and help deliver personalized treatment. These types of technologies can absolutely change our lives for the better. They can move us from what we call treatment to the prevention and prediction of diseases.
In the case of healthcare, such an approach is of great value.

The most promising, value-based technologies do something at least slightly better than it is done today and can greatly change the way we perceive something. For example, massive carbon emission reduction technologies change the way we think about the use of energy and water.

What is the greatest threat to economic growth?

Talking about new technologies and economic prosperity, Leesa said that one of the biggest concerns is piracy, both physical and digital.

Piracy is one of the factors that can disrupt supply chains, destroy jobs, and put economic development under threat. For example, when digital versions of books are given away for free, the people who contributed to their creation lose their wages.

Nevertheless, despite the huge negative effect, it is sometimes possible to find some positive sides. This way of distribution can play an important role in the digital preservation of media that no longer generates profit but could still be valuable in terms of history, art, or culture. Moreover, it can open new opportunities for those who have limited access to legal distribution infrastructure.

How to decide where to invest

Different investors may apply their own methodologies to the decision-making process. Leesa shared that R3i Capital also has its own philosophy when it comes to choosing projects. One of the most important things they pay attention to is the team.

To avoid unconscious bias, R3i Capital relies on an AI engine built in cooperation with Hatcher. As a result, every team goes through the same filters, and the result of the evaluation is as objective as possible.

Thanks to this approach, absolutely everyone has the same chances. Such a system allows the VC firm to give a voice even to groups that are often ignored, like women and minorities.

Moreover, it's very important to analyze how fundable the company is and what its likely impact is.
According to Leesa, while doing this, it's essential to maintain 100% transparency. This will ensure the desired trust between founders and capital.

After all, capital markets do not need to be brutal. They should be fair instead. This will help to achieve a win-win interaction.

Why does sustainability matter?

Leesa explained that with its investments, R3i supports tech companies with a tangible ESG product impact. In other words, they focus on products that prioritize environmental issues, social issues, and corporate governance.

However, investors sometimes say that they do not care about the environment and sustainability; they care about money.

But how can a healthcare product not improve patients' lives if it is a good healthcare product? The same is true of energy, cybersecurity, mobility, and other industries.

Becoming more sustainable, environmentally friendly, and socially valuable doesn't mean earning less money. In fact, it can often mean more money. This can be explained by people's willingness to pay for things that enhance their quality of life.

If a product harms people or has massive negative consequences, society won't trust it.

As venture capital firms are interested in long-run outcomes, they try to bet on winning technologies. Sustainable businesses that focus on social and environmental effects are definitely among them.

Winning together, not alone

At the end of the discussion with Max, Leesa shared her thoughts about the role of society in innovation. One of the key recommendations she can give to everyone is to be more empathetic to each other and to common problems.

Sometimes, when founders can't get financial support from governments or corporations, they can receive help from other people who care about solving the problems their projects address.

To achieve success, it's very important to take that first step.
And we can't do it alone.

At Sigli, we share this vision, and that's one of the reasons why we create the Innovantage podcast episodes.

If you are also fascinated by the capabilities of AI and other emerging technologies, as well as their power to change the world, stay with us. New inspiring ideas are coming soon!
Generative AI Development
Has AI become mainstream now and are we ready for that?
August 6, 2024
10 min read

In the second episode of the Innovantage podcast, Max Golikov talked to Vasil, the Chief Delivery Officer at Sigli, a person who was captivated by AI long before it became available to a wide audience. This sphere looked completely different from run-of-the-mill computing, which made it extremely interesting to him. Inspired by films such as Terminator and Star Trek, Vasil chose AI as his major.

Today, when the AI revolution seems to be gaining momentum, it's very important for businesses not to miss their chance to join it, or maybe even to lead this transformation. At Sigli, we want to help you gain a competitive advantage by explaining how you can leverage the power of this technology.

Check out the full Innovantage episode with Vasil Simanionak here: https://youtu.be/osnlRp0RMT8?si=qT6OYYcbyiVTI8Oe

In a dialog with Max, Vasil shared his vision of the past, present, and future of artificial intelligence and named the one task that he will never delegate to AI.

In this article, we've gathered the most interesting ideas from their discussion, and we hope that you will find them insightful.

AI: When everything began

It would be completely wrong to say that AI appeared together with ChatGPT, or a year or two earlier. In reality, products powered by some form of AI were developed quite long ago.

The first expert systems were delivered around 50 years ago, and they already represented an example of very narrow AI. Of course, their capabilities, as well as their use cases, were rather limited.

For example, such systems could be used by lawyers in specific cases. Lawyers often need to ask their clients standard questions, like place of birth, date of birth, place of residence, etc. Based on the answers to these questions, an expert system can prepare a document that will then be submitted to the authorities or used for other purposes.

So what are expert systems?
They can be defined as early forms of AI that rely on a set of rules provided by human experts to make decisions or solve problems within a specific domain.

Developing these solutions is ordinary coding, because they are built on conditional rules of the form "if X, then do Y". The main task, and the main challenge, is to define the right rules. This means that the human experts who write these rules should deeply understand the specifics of all the related processes.

Is ChatGPT an example of AI?

The next stage of AI development is what is considered to be AI in our modern understanding.

While expert systems were difficult for the general public to understand and had only narrow, specific uses, with ChatGPT-like models everything is different. They have gained enormous public attention, and they are available to everyone. These solutions allow users to input queries and get clear results.

When people talk about this kind of system, in the majority of cases it is ChatGPT that gets mentioned, and that's an example of excellent marketing and branding.

The majority of people definitely consider ChatGPT to be AI. But is that true? Vasil highlighted that the correct answer depends on our perspective and our exact understanding of artificial intelligence.

On the one hand, large language models (LLMs) do not have common sense, but they can process data. They are built on neural networks that mimic the human brain.

A neuron has, for example, two inputs and a single output. If the first input is triggered, the output will be triggered. If the second one is, the output won't be triggered. In networks, millions of such neurons are arranged in layers. Users make an input and wait for an output. That's how they work.

When it comes to deep learning with LLMs, we do not define the underlying model that processes this data.
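The toy neuron just described can be sketched in a few lines of Python. This is only an illustration of the idea, not how production neural networks are implemented; the weights and threshold below are invented for the example:

```python
def toy_neuron(inputs, weights, threshold=0.5):
    """Fire (return 1) when the weighted sum of inputs crosses a threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two inputs: the first excites the neuron, the second inhibits it,
# mirroring the "first input triggers, second input suppresses" example.
weights = [1.0, -1.0]

print(toy_neuron([1, 0], weights))  # first input active -> fires: 1
print(toy_neuron([0, 1], weights))  # second input active -> stays silent: 0
```

A neural network then wires millions of such units together in layers, and learning consists of adjusting the weights rather than writing explicit rules.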
We just define a kind of infrastructure with the neural network, where we have a lot of neurons interconnected at different layers. We throw data in and expect a result. But even the creator of such a model has no idea exactly how an LLM will answer.

Due to the huge media influence, today these ChatGPT-like solutions are widely believed to be true AI, despite some limitations in their capabilities.

Basics: What is AI?

AI is a broad umbrella covering everything machines can do quite similarly to what humans can do. Of course, people can calculate, but a calculator is not an AI solution. So we can say that in the context of AI, machines should do something as well as humans can, or maybe even better.

Despite all the aspirations around AI, it is still a tool, not a different species or anything like that.

Different levels of AI

Today, we can define several models (or levels) of AI. They differ from each other not only in their functionality but also in how they deal with data. Let's briefly summarize them.

Expert systems. As described above, expert systems do not actually work with data. These systems are nice, straightforward tools, but they do not give you the impression that you are dealing with intelligence.

ML models. ML systems work with data, but there are no strict rules. Engineers and analysts define the model of how this data should be gathered and processed. So we have control over how the solution works with our data. We feed the data into the model and check how to use it.

A good example here is an ML-powered app for the real estate market. You input different parameters, like the size of an apartment and its location, and the app calculates the price based on those parameters.

Large language models. Text models are the simplest ones of this kind. They operate on text input and can convert this text into new text.
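The real-estate pricing example mentioned above can be sketched as a simple linear model. All coefficients here are made up for illustration; a real ML system would learn them from historical sales data instead of having them hard-coded:

```python
# Toy pricing model: price = base + rate_per_m2 * size + a location bonus.
# The numbers are illustrative assumptions, not real market data.
LOCATION_BONUS = {"city center": 50_000, "suburbs": 10_000}

def predict_price(size_m2, location):
    base = 20_000
    rate_per_m2 = 2_500
    return base + rate_per_m2 * size_m2 + LOCATION_BONUS.get(location, 0)

print(predict_price(60, "city center"))  # 20000 + 2500*60 + 50000 -> 220000
```

The difference with true machine learning is exactly the point made in the text: engineers define the shape of the model, but the weights themselves are fitted to data rather than written by hand.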
Here, their work can be compared with the work of programmers, who need to convert requirements into code.

When an output offered by the model is not good enough, a user can provide feedback. In this way, a model can be trained to produce better outputs.

Moreover, there is a lot of debate about the quality and origin of the data used for training, for example, whether it was obtained and used legally. However, there is still no consensus on that.

Will humanity be killed by AI?

That's one of the questions that may sound really controversial, and sometimes even a little naive, but it's really interesting how AI experts answer it. Vasil provided a quite worrying reply. He said that everything depends on our behavior. Nevertheless, that's not a reason to look for ways to be as nice to AI as possible in order to survive. It's just a reason to study this aspect a little more deeply.

According to Vasil, there is a possibility that AI will exterminate humanity, and there is also a possibility of seeing a dinosaur outside your window. But still, it is just a possibility.

Our future (and our chances of staying alive) will depend on how AI-powered solutions, including LLMs, are designed and how we use them.

If we let any ChatGPT-like solution interact with the internet, it will be able to perform rather complex tasks. For example, it will be able to start a website, buy a domain (if you give it some money), and create a no-code or low-code platform.

Even an LLM can interact with the real world, and actual AI can convincingly mimic a human, not only in text conversations but also in live streams. If you have ever seen videos with lifelike talking faces generated by Microsoft's VASA, you know that they can be very convincing.

So can AI take over the world? Theoretically, yes. But only if a human lets it.

Day-to-day applications of AI for actual businesses

In the conversation with Max, Vasil named several examples of widely adopted business use cases of AI.

Content generation.
AI can be applied in numerous situations where it takes some input from the user and creates content based on it. AI can compose a good text for your email even if you have just a couple of bullet points.

Summary creation. AI can be a great helper in digesting content that was created by someone else. For example, imagine that you have a 20-page PDF file and need to get a general understanding of its content. How much time will you need? What if the document contains 200, 2,000, or 20,000 pages? AI can process it and offer you a quick summary much faster than any human can. What is even more surprising is that for AI, 20,000 pages and 20 pages are just the same.

Support services. AI doesn't get tired, it doesn't get distracted, and it doesn't have bad days. It has no emotions, and that works in its favor. That's why you shouldn't hesitate to ask AI as many questions as you have. It won't be annoyed. Vasil admitted that in his everyday work, he does the same in order to get as much relevant information as possible. In tests against humans, AI turned out to be more polite and tolerant. That's why AI-powered apps can be a good choice for first-line support services that deal with general issues and common queries before escalating to specialized help.

AI is always willing to help and can reduce the time it takes to answer a client. However, in this context, it's important not to omit the financial factor. If you want nearly real client support that functions with practically no human participation, it will turn out to be more expensive than hiring human specialists.

How much does it cost to implement AI?

The cost of such projects can vary greatly based on various factors and parameters. For example, the basic infrastructure for models like ChatGPT consists of a huge number of graphics processing units, or GPUs.
This specialized hardware is essential for processing complex computations, as well as for training and running AI models.

That's why it will be necessary to factor in the cost of GPU rental services provided by Nvidia or Microsoft, for example. They have different subscription models that address different needs.

Alternatively, you can opt for on-premises infrastructure and locate all the required software and hardware resources within your physical premises. This model will also come with additional expenses.

If we turn to the use of AI models, here we also have various scenarios.

Vasil noted that in the case of using a commercial model, when you do not need to train it, the cost of one query will be a couple of cents. However, when you need to train and fine-tune your own solution, it will be a completely different story. The price will be significantly higher, and it's very challenging to estimate.

It's also crucial to bear in mind that with LLMs, you can't expect a 100% correct result for every query. That's why, to get the desired outcome, several interactions may be required.

In any case, the price-quality principle works here quite well. The bigger your investment, the better the result you can expect. However, you should accept the fact that it won't be a human result. Given this, businesses should find a balance between the amount they are ready to pay and the quality they will accept.

Future of AI: Will it replace human experts?

Talking about the future, both Max and Vasil agreed that technologies are changing too quickly. It's very hard to make any predictions beyond 5 years.

However, according to Vasil, in the near future, ChatGPT and similar solutions can become great personal assistants. The use of such assistants can go far beyond purely business applications.
For example, they will be able to check the health of users, send them reminders, and fulfill a lot of other tasks that will make people's lives better.

Another interesting and highly promising sphere of AI use is communication, which is highly important in business.

Let's admit that even when speaking the same language, we all understand some things differently. AI-powered personal assistants can help make sure that our thoughts are perceived by others the way we intend.

ChatGPT-like systems will be able to translate our ideas into fuller explanations that are more comprehensible to others. They will serve as bridges between people, as they don't just translate words one by one. They can translate what is really being said.

That is the positive side of their implementation. Nevertheless, there is a negative one as well: some translators may lose their jobs.

When is a human better than AI?

One of the key issues with AI highlighted by Vasil is that you can't always check whether ChatGPT offers you something that is true or not. That's why, according to him, it's definitely not the best idea to rely on AI to explain something to children. Here, a human is the undisputed leader (especially when it comes to your own child).

Of course, there are solutions like Google's Gemini. In this case, answers are googleable, and you can see the source of the information. Nevertheless, AI can't fully understand the context in which a child poses this or that question. Moreover, human interaction is something that we all need.

What skills are vital in the AI era?

During their discussion, Max and Vasil also touched on a very important topic: the skills that are required today.

Earlier, teachers and books were the sources of truth for the young generation. Then the internet joined this list. Now, everything is quite unclear.

What sources can be trusted?
Whom can we believe?

That's why, for the new generation, it is very important to develop the ability to check the source of data and understand whether it is trustworthy. A human can be good at some things but completely wrong about others. Given this, it's crucial to apply critical thinking and see whom and when we can trust.

While talking about the value of AI, Max and Vasil also highlighted the importance of human connection and personal touch in communication. These are things that we should preserve even in the era of AI and significant digital transformation.

If you want to learn more about AI, its current role for businesses, and its future prospects, do not miss the next episodes of the Innovantage podcast hosted by Max Golikov.
AI Development
Innovantage podcast: Will AI change education?
July 16, 2024
15 min read

AI has become a buzzword. But do we really know a lot about it? Can we fully leverage the new opportunities that it brings to us? To dive deeper into this topic and to make this space more transparent to everyone, we’ve launched the Innovantage podcast. In the series of episodes, Sigli’s CBDO Max Golikov will talk to AI experts who will share their professional opinions on how AI is transforming the world around us.

Our first guest is Dominik Lukes, System Technology Officer at Oxford, who runs the Reading and Writing Innovation Lab. Dominik has been exploring the potential of artificial intelligence since the early 90s, long before the world became familiar with ChatGPT.

In this episode of the Innovantage podcast, Max and Dominik discussed the impact of AI on the education sector and its potential to revolutionize the academic environment. Moreover, they touched on the basics of generative AI, the working principle of LLMs, and even the probability of an AI apocalypse.

Check out the full Innovantage episode with Dominik Lukes here: https://youtu.be/7b4v6xnRDLI?si=8B4e_zuhnsKyrPwg

Have no time to watch it now? We've prepared a short summary for you!

Key terms that you need to know

To begin with, let us briefly explain the main terms related to the topic under consideration.

Any AI tool is based on a model, and a model is a set of parameters. It is these parameters that ensure that if you feed something into a model, it will give you something back.

The models used in ChatGPT and similar solutions are models that generate language.

What are LLMs?

But what do we mean when we say "large language models" (LLMs)? What makes them large?

The free version of ChatGPT that is currently available relies on a corpus of roughly half a trillion words, which is an enormous number. As for GPT-4, OpenAI hasn't revealed precise figures.
When Meta released its large language model Llama 3, however, the company said it had been pre-trained on over 15T tokens, all collected from publicly available sources. The bigger the corpus used for pre-training, the higher quality you can expect.

There are also the parameters that make models work. Some small models have 8 billion parameters, while large models have hundreds of billions.

Why do you need to pre-train AI models?

An important breakthrough in AI was the realization that it is not necessary to train a model for every single task separately. You can take all these 15 trillion tokens and pre-train a model with some basic cognitive capabilities.

After pre-training, it's time for fine-tuning on top of that, which makes the model do other things. Companies are constantly fine-tuning their models, which is why the models keep changing: something that worked last month may not work this month.

To achieve the desired results, the data used for pre-training has to be clean and carefully selected within the abilities of your algorithm. Your model has to be pre-trained and then fine-tuned for a particular purpose. That's one of the things that will make your solution work better.

How do AI models work?

The work of such models can be compared to a regression curve, which is a kind of prediction curve. There is an opinion that these models simply work on frequencies and occurrences, and that's partly true: what they have inside are weights and relationships.

Dominik compared such models to semantic machines. They are semantic in the sense that they understand relationships between things, but not in the sense of understanding the world outside themselves.

GPT: What is it?

Have you ever thought about what this abbreviation means? Actually, these three letters stand for what we've just explained about how such models work.

G is for Generative. It means that the model is capable of generating text.

P is for Pre-trained. It means that the model is pre-trained on a large corpus of data, learning patterns, grammar, facts about the world, and some reasoning abilities.

T is for Transformer. This refers to the underlying architecture of the model used for natural language processing.

AI hallucinations

If you have ever worked with LLMs, you've probably noticed that they can sometimes provide inaccurate answers or "invent" something that doesn't really exist. Despite massive improvements, these models can still hallucinate.

It happens because models are trained on data: they learn to make predictions by finding patterns in it. With biased or incomplete data, a model may learn incorrect patterns, which results in wrong predictions.

How to make AI work better

Unfortunately, AI models can't teach us how to communicate with them correctly. And let's be honest, interacting with AI is not the same as communicating with a person.

You should be ready for a "rollercoaster": sometimes AI tools go far beyond your expectations, while at other times their outputs may disappoint you.

To achieve better results, you should experiment, try different prompts, and elaborate your own approach to making AI solve your tasks.

Not by ChatGPT alone: AI-powered tools that are used now

When ChatGPT was made publicly available in November 2022, it caused enormous hype practically immediately. Let's be honest: in mass perception, ChatGPT has become a synonym for generative AI. Nevertheless, that's far from true. Today there is a huge number of various tools whose functionality can differ greatly from what ChatGPT offers.

First of all, you can start your familiarization with AI with the so-called Big Four.
Apart from ChatGPT by OpenAI, it also includes:

Claude by Anthropic;
Gemini (formerly known as Bard) by Google;
Copilot by Microsoft.

People take these popular models and use them to build different tools. For example, Elicit is a tool that can help you with your research: it can search for papers and extract information from them. Of course, you will still need to check the results, but you will get a really good draft.

There are also projects that leverage the possibilities of the released IDE for GPTs, which allows people to create, for example, custom bots within ChatGPT or Copilot. And by using the APIs, it is possible to build solutions outside of these platforms.

According to Dominik, we are currently at the stage where everybody is trying to see what AI can do for us now. But we are also starting to explore what it can do for us in the future and what the possibilities are.

Such a highly respected educational institution as Oxford is actively discovering the potential of AI along with the rest of the world. Dominik shared that they are experimenting with ChatGPT, its enterprise version, integrations with Copilot, as well as other innovative AI-powered tools.

Here, it's highly important for researchers to understand what students think about various solutions, what they find useful, and how they can benefit from the integration of AI into the learning environment.

Dominik also shared his personal thoughts. According to him, Claude is a good tool for educational purposes because it can deal with long context: you can upload an entire academic paper and ask it to provide a summary or find specific information in the text. This feature makes Claude different from ChatGPT, and it can be highly helpful not only for students and professors but also for businesses.

Homework is dead. But what about education itself?

When it comes to education and the changes that AI has brought (and will bring), a lot of people are concerned about how to check the level of students' knowledge. And their position is quite clear.

For example, the format of take-home exams used to be rather popular: students received tasks and were asked to complete them at home. Now, with so many AI-powered tools at hand, such tasks can be entirely useless.

It's obvious that you can no longer really trust that all students who hand in their essays have written them entirely on their own. Such things as composition, spelling, grammar, and other objective points that professors look at can be checked and improved by AI solutions. Of course, these tools are still far from perfect when it comes to research and in-depth analysis. However, that's something we have on the horizon.

Some teachers try to use so-called AI checkers that are expected to detect AI-generated content. Nevertheless, AI experts insist that today there are no reliable tools that can identify such content with 100% precision. There are different big and small models, and they generate content in different ways. Moreover, their outputs greatly depend on prompts. As a result, we can't trust the results shown by these checkers.

How AI is integrated into the academic process at Oxford

But how can professors motivate students to learn new material if even their homework can be done with the help of artificial intelligence?

Professors at Oxford have their own approach to the academic process that can be a good solution for many educational establishments. A big part of the educational activities happen in small groups, which means students have a lot of discussions. So when they submit papers, they also have to talk about them afterward.

As for exams at Oxford, a lot of examinations take place in an invigilated environment.
So professors can see what the students are using.

Dominik is quite optimistic about the integration of AI into the education process. Though it's too early to speak about mass adoption, its implementation will definitely continue. And the task for both educators and students is to find the best way to use artificial intelligence for their needs.

AI for teachers: How to use it now

Max and Dominik also talked about the ways teachers can apply AI already now.

Here, Dominik shared one simple principle of working with AI solutions: you should ask the right thing from the right tool. For example, ChatGPT can be really good at explaining math terms and concepts, but it is really bad at calculating and solving math tasks.

Similar things can be observed in other disciplines. Language teachers can greatly benefit from the ability of AI to create multiple-choice tests about a text or a grammatical feature; AI copes with such tasks perfectly.

Nevertheless, if you ask an AI model to create fill-in-the-blank grammar exercises, you shouldn't have high expectations. In this case, AI can offer the wrong options or put the gaps in the wrong places. Quite often, if you ask AI to give you an example of a grammar feature, you will get an answer that won't satisfy you. Yet when AI is generating a text for you, it won't make such mistakes.

AI generation still requires strong human supervision, just like an intern: it can work for you, but you still need to check the provided results.

Skills for future students to work with AI

The educational environment is changing. How can we get ready for this AI-enriched world?
Are there any specific skills that people should try to develop in order to work better with the newly introduced tools?

While answering these questions, Dominik highlighted that it is impossible to name a precise skillset. However, here's a list of recommendations from a person who has been working with AI for many years:

Keep exploring.
Keep trying it.
And do not think that if you have used an AI tool a few times, you have explored the entire frontier of its capability.

Maybe in a year or two, professionals will identify specific skills you need, but not now. There isn't one best tool or one best skillset for the academic environment, or for any other space.

AI for disabilities: Can it help people overcome barriers?

Speaking about AI, it is also interesting to note the potential of such tools to change the quality of life for people with different types of disabilities. Here, it's worth paying attention not only to what such solutions can offer in the educational context but also in the context of everyday tasks.

Tools such as screen readers or text-to-speech solutions can be highly useful for people with low vision and various visual impairments. It is possible to take any webpage and ask AI to voice what is written or shown there. In other words, even if a person can't read or see something on their own, AI can do it. Of course, inaccurate outputs caused by AI hallucinations are still possible. But that's already a great step forward.

AI can also be of great help to those who have problems with writing and typing due to dyslexia or other issues.
In this case, people can rely on speech-to-text features, as well as AI-powered grammar and spelling checkers.

Given this, we can say that artificial intelligence makes a lot of things accessible to people, even things they couldn't do before.

Talking about the capabilities of AI to expand existing borders for people, Dominik also mentioned that today, not speaking English is a huge limitation. Those who do not know the language are cut off from a huge part of the world, especially when it comes to learning, as a lot of materials are provided only in English. Here, AI can also demonstrate its power: you do not need to wait until this or that research is translated into your native language. You can ask AI to do it for you and get a quick result.

And… Is an AI apocalypse inevitable?

Let us be fully honest with you: that's just an eye-catching subheading. While some people are trying to guess what is going on in GPT's mind, experts like Dominik already know the answer. Nothing. Really, nothing is going on in GPT's mind until the moment we send a question to the chatbot.

We humans are learning constantly; even when we are sleeping, our brains are changing. Large language models, as well as other AI-powered tools, can't think as we can. They are not exploring the world around them. If there are no requests from users, such models sit quietly, just like a blob of numbers on your hard drive. It means that we can feel completely safe.

Instead of the final word

The AI industry is advancing at an enormous pace. Even a couple of months can bring impressive changes, and half a year feels like a leap into a new era. That's why it's practically impossible to predict what comes next and when. So let's wait and see how AI tools will evolve and how education and other spheres will be impacted by these changes.

Looking for more insights from the world of AI?
Follow us on YouTube, like our videos, ask questions in the comments, and do not miss the next episodes of the Innovantage podcast hosted by Max Golikov.