Google's AI chatbot, Bard, is getting better at logic and reasoning tasks, particularly mathematics and coding. A recent blog post from the company credits a technique called "implicit code execution" with the improvement.
Bard, like other large language models (LLMs), is essentially a prediction engine that anticipates the next words in a sequence based on a given prompt. While this capability makes Bard highly effective at tasks like writing emails and essays, it has proven less reliable for software development and computation.
Some might question this assertion, pointing to code-generating models like GitHub's Copilot and Amazon's CodeWhisperer. However, unlike Bard and other general-purpose models, these were primarily trained on code samples. Bard and similar chatbots have been trained on a wider range of text samples, including web content, ebooks, and other resources.
Keen on addressing Bard's deficiencies in coding and math, Google devised implicit code execution. This allows Bard to compose and run its own code, testing it to generate more accurate responses. According to Google's internal benchmarks, the revised Bard's responses to computation-based word and math problems have improved by 30% over the previous version.
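Google has not published the implementation details, but the idea described above can be sketched conceptually: rather than predicting a numeric answer token by token, the system writes a small program for the computational part of a prompt, executes it, and folds the computed result into its reply. In the hypothetical sketch below, `mock_model_write_code` is a hard-coded stand-in for the step a real LLM would perform.

```python
# Conceptual sketch of "implicit code execution" (not Google's actual
# implementation): generate a program for the computational sub-task,
# run it, and use the result in the natural-language answer.

def mock_model_write_code(question: str) -> str:
    """Stand-in for the LLM step: a real model would generate this
    program from the prompt; here it is hard-coded for illustration."""
    return (
        "def solve():\n"
        "    # Sum of the first 50 odd numbers: 1 + 3 + ... + 99\n"
        "    return sum(range(1, 100, 2))\n"
    )

def answer(question: str) -> str:
    code = mock_model_write_code(question)
    namespace = {}
    exec(code, namespace)          # run the generated program
    result = namespace["solve"]()  # use the computed result, not a guessed token
    return f"The answer is {result}."

print(answer("What is the sum of the first 50 odd numbers?"))
# Prints: The answer is 2500.
```

The key design point is the last step: the reply is grounded in an executed computation rather than in the model's next-word prediction, which is why Google reports better accuracy on computation-heavy prompts.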
However, Bard's improvements do not guarantee infallibility. The model may not always generate code to aid its responses, may generate incorrect code, or may exclude executed code from its responses. Despite these limitations, Bard's enhanced logic-driven capabilities mark significant progress in Google's AI development efforts.
When Bard was first launched, it received less-than-stellar reviews. However, with the implementation of implicit code execution and other enhancements, Google hopes to reverse the initial negativity and compete effectively with leading AI chatbots.