So, have you ever wondered whether there are areas where AI codes better than seasoned developers? Well, large language models are already pretty good at handling basic tasks like creating simple web apps. They can even help with more complex work, offering insights on architecture and answering specific development questions.
Think of an AI language model as a helpful teammate. It brings fresh ideas and can really boost your productivity. But for genuinely tough coding challenges, the consensus is that you still need the skills of experienced human developers.
“Of course, people are still needed,” says Jani Väisänen, CTO of Unikie.
Language models are great at assisting humans. The clearer you are with them and the more detailed info you provide, the better their solutions will be. For example, if you give clear requirements and detailed data, a language model can help you create well-structured software.
Unikie has a lot of experience using language models, especially for optimizing software that runs on GPUs (graphics processors). GPUs are used in things like embedded systems, mobile phones, computers, and game consoles. Optimizing them requires a good understanding of parallel processing and hardware architecture. You need to look inside the processor, understand its block-level design, and see how data moves along its different read paths. GPU optimization focuses heavily on making good use of the processor's internal memory.
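Why read paths matter so much can be made concrete with a toy model. The sketch below is a plain-Python illustration (not Unikie's tooling, and the line size is made up): it counts how many distinct "cache lines" two access patterns touch. Fewer line loads means less memory traffic, which is the heart of GPU memory optimization.

```python
LINE_SIZE = 8  # elements per cache line (illustrative value)

def lines_touched(indices, line_size=LINE_SIZE):
    """Count the distinct memory lines an index sequence touches."""
    return len({i // line_size for i in indices})

n = 64
contiguous = range(n)          # thread i reads element i
strided = range(0, n * 8, 8)   # thread i reads element 8*i

print(lines_touched(contiguous))  # 8  -> coalesced: one line serves 8 reads
print(lines_touched(strided))     # 64 -> every read pulls a fresh line
```

Same number of reads in both cases, but the strided pattern moves eight times as much memory; rearranging data so neighboring threads read neighboring elements is a classic GPU win.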
Working with AI
Väisänen gives a good example of how a language model can help a developer start a new task.
“Imagine a developer needs to use the Mamba deep learning architecture for GPU sequence modeling but isn’t sure where to start,” he explains. “In this case, they can give the language model a high-level description of the task, and the model will guide them in the right direction.”
GPUs are different from general-purpose processors because they can run multiple computing processes at the same time, which can be tricky for humans to manage.
“If the signal passes through the GPU too slowly, a person might focus on fixing a single point in one path. We’ve found that a language model can look at the whole system more effectively than a human,” Väisänen notes. “But first, the model needs to know the specific GPU’s hardware details, like buses, memories, and interfaces, because AI can’t interpret block diagrams well enough yet.”
Boosting Performance
So, can a language model produce better code than a human if it gets a good description of the processor?
“We’re not there yet,” Väisänen cautions. “We’ve noticed that AI tends to create too many barriers in the code.”
A barrier is a synchronization point where the program waits for all parallel processes to catch up. Barriers are sometimes necessary on GPUs, but too many of them undo the optimization, because every barrier stalls all the parallel work until the slowest process arrives. Despite these limitations, Unikie has seen significant improvements in productivity and programming results with AI language models.
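The barrier idea isn't GPU-specific. A minimal CPU-side analogy using Python's `threading.Barrier` (standing in for a GPU-style barrier such as CUDA's `__syncthreads()`; this is an illustration, not GPU code) shows the wait-for-everyone semantics, and why each unnecessary barrier costs a full stall:

```python
import threading

NUM_WORKERS = 4
barrier = threading.Barrier(NUM_WORKERS)

partial = [0] * NUM_WORKERS
result = []

def worker(i, data):
    # Phase 1: each worker sums its own slice in parallel.
    partial[i] = sum(data[i::NUM_WORKERS])
    # Synchronization point: nobody proceeds until ALL partials are
    # written. This is exactly what a GPU barrier does -- and why every
    # redundant one stalls all parallel lanes for no benefit.
    barrier.wait()
    # Phase 2: worker 0 can now safely read everyone's result.
    if i == 0:
        result.append(sum(partial))

data = list(range(100))
threads = [threading.Thread(target=worker, args=(i, data))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result[0])  # 4950
```

Here the single barrier is genuinely needed: without it, worker 0 could read `partial` before the others finish. The over-synchronization Väisänen describes is the opposite case, barriers inserted where no such read-after-write dependency exists.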
“The language model has been very effective in optimizing GPU memory usage. Optimization is crucial in limited environments with small memories and computing power, often resulting in better code. For instance, in one project, the original implementation had fourteen kernels, but after AI-assisted optimization, only five were needed,” Väisänen describes. “But it still takes a skilled professional to guide the language model on the specific hardware being optimized.”
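The kernel-count reduction Väisänen describes is essentially kernel fusion: merging several passes over the data into one, so intermediate results stay in fast on-chip memory instead of round-tripping through global memory between kernels. A schematic Python sketch (the three operations are invented for illustration, not from the actual project):

```python
# Unfused: three "kernels", each a full pass over memory, with an
# intermediate array written to "global memory" between passes.
def unfused(xs):
    a = [x * 2 for x in xs]      # kernel 1: scale
    b = [x + 1 for x in a]       # kernel 2: offset
    return [x * x for x in b]    # kernel 3: square

# Fused: one kernel, one pass; intermediates live in registers.
def fused(xs):
    return [(x * 2 + 1) ** 2 for x in xs]

xs = list(range(5))
print(unfused(xs) == fused(xs))  # True -- same result, one pass not three
```

The results are identical, but the fused version makes one trip over the data instead of three and allocates no intermediate arrays; on a GPU with small memories, that difference dominates.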
This article was originally published in Finnish in Tekniikka & Talous.