
Opera Browser Integrates Local AI Models

Opera, a versatile web browser, competes with heavyweights like Google Chrome, Mozilla Firefox, Apple Safari, and Microsoft Edge. 

Opera is built on the Chromium engine, ensuring compatibility with modern web standards. The company positions it as faster, safer, and more feature-rich than competing browsers, with a particular emphasis on security and privacy.

Key Features:

  • Ad Blocker: Browsing with fewer distractions and faster loading times.
  • Free VPN: Enhanced privacy and security while browsing, without any cost.
  • Integrated Messengers: Chat directly within the browser, no need to switch apps.
  • Aria Browser AI: Opera’s free built-in AI assistant, available on both mobile and desktop.
  • Player in Sidebar: Access music and podcasts seamlessly.
  • Opera Cashback: Automatically earn money back while shopping.
  • Workspaces: Organize tab groups in customizable workspaces.
  • Battery Saver: Extend laptop battery life easily.
  • Unit Converter: Automatically convert time zones, currencies, and units.
  • Personal News: Customizable newsfeed on the start page.
  • Sync Data: Synchronize Opera across all your devices.

Opera is available on Windows, macOS, Linux, Android, and iOS (where, like all iOS browsers, it uses the WebKit engine). Mobile versions include Opera Mobile and Opera Mini.

Opera Integrates Local LLMs

Opera has become the first major browser to integrate built-in support for local LLMs. These local large language models run directly on the user’s device, letting users manage and query powerful AI models while keeping their data on-device, which improves privacy and, on capable hardware, responsiveness compared with cloud-based alternatives.

Opera has announced experimental support for 150 local LLM variants across approximately 50 model families in its Opera One developer browser. This is a significant step, as it lets users work with AI without sending their data off the device.

Featured models include Meta’s Llama, Vicuna, Google’s Gemma, and Mistral AI’s Mixtral. By running these models locally, users avoid sharing prompts and data with external servers, keeping their personal information on their own machine.

How can users access these Local LLMs in Opera?

  1. Visit Opera’s developer site.
  2. Upgrade to Opera One Developer.
  3. Select and download your preferred local LLM.

Each model variant requires roughly 2–10 GB of storage space. In return, local LLMs can respond without a round trip to a remote server, though actual speed depends on the user’s hardware.
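To make the idea of a local LLM more concrete, here is a minimal sketch of querying a model that runs entirely on your own machine. It uses Ollama, a popular open-source runtime for models such as Llama, Gemma, and Mixtral; this article does not say what Opera uses internally, so treat the endpoint, the default port 11434, and the gemma:2b model name as illustrative assumptions rather than a description of Opera’s implementation.

    # Illustrative sketch: talking to a locally running LLM over Ollama's REST API.
    # Assumes an Ollama server is listening on localhost:11434 and that the
    # "gemma:2b" model has already been downloaded (e.g. via `ollama pull gemma:2b`).
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

    def ask_local_llm(prompt: str, model: str = "gemma:2b") -> str:
        """Send a prompt to the local model and return its complete response text."""
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # ask for one JSON object instead of a token stream
        }).encode("utf-8")
        request = urllib.request.Request(
            OLLAMA_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)["response"]

    if __name__ == "__main__":
        print(ask_local_llm("In one sentence, why do local LLMs improve privacy?"))

Because the request never leaves localhost, both the prompt and the model’s answer stay on the device, which is the same privacy property Opera highlights for its in-browser local LLMs.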

Opera’s commitment to AI innovation is evident in this bold move. According to Krystian Kolondra, EVP Browsers and Gaming at Opera, introducing Local LLMs paves the way for exciting possibilities within the rapidly evolving local AI landscape.

Key Benefits

Users can now manage and run powerful AI models directly on their own devices, gaining improved privacy and, on suitable hardware, faster responses than cloud-based AI systems.
