Scientel Announces Gensonix AI LLM For AMD Radeon Series GPUs

Gensonix AI DB efficiency combined with the power of Meta's Llama 3B model and AMD's Radeon GPU architecture makes LLMs practical on small-footprint systems

Published on Feb. 25, 2026

Scientel, a U.S.-based systems technology company, has announced the launch of its Gensonix AI LLM (large language model) solution, designed to run on the smaller members of the AMD Radeon GPU series. The company claims that combining the efficiency of its Gensonix AI DB with Meta's Llama 3B model and AMD's Radeon GPU architecture makes LLMs practical on small-footprint systems, addressing the power constraints data centers face when hosting them.

Why it matters

As large language models (LLMs) become more popular, there is a growing need for efficient, small-footprint systems that can run on local networks or in the cloud rather than relying on power-hungry data centers. Scientel's Gensonix AI LLM solution aims to address this need by combining the company's own Gensonix NewSQL AI DB, Meta's Llama 3B model, and AMD's Radeon GPU architecture into a more efficient and practical LLM platform.

The details

Scientel's LLM systems are designed to operate in conjunction with its Gensonix NewSQL AI DB, which can store and handle multiple data types, including relational, document, text, and vector data, in their native forms. This supports high efficiency and eliminates the need for data conversion. The Llama 3.2 3B model, developed by Meta, is a high-performance LLM designed for efficient, low-latency deployment on smaller hardware such as servers and laptops, with strong capabilities in tasks such as summarization, retrieval, and multilingual dialogue. The AMD Radeon RX 6800, one of the smaller members of the AMD Radeon GPU family, can comfortably run the Gensonix AI LLM system.
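The claim that a 3-billion-parameter model fits comfortably on a 16 GB consumer GPU is easy to sanity-check with back-of-envelope arithmetic: model weights occupy roughly parameter count times bytes per parameter. A minimal sketch, where the precision options and figures are illustrative assumptions rather than Scientel's published configuration:

```python
# Rough VRAM needed for the weights of a 3-billion-parameter model
# at common weight precisions. Excludes activation memory and KV cache,
# so real usage is somewhat higher than these floors.

PARAMS = 3e9  # approximate parameter count of Llama 3.2 3B

BYTES_PER_PARAM = {
    "fp16": 2.0,  # 16-bit floating point
    "int8": 1.0,  # 8-bit quantization
    "int4": 0.5,  # 4-bit quantization
}

def weight_footprint_gb(precision: str) -> float:
    """Approximate GPU memory for model weights alone, in GB."""
    return PARAMS * BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"{precision}: ~{weight_footprint_gb(precision):.1f} GB "
          f"of a 16 GB card")
```

Even at full fp16 precision the weights come to about 6 GB, leaving ample headroom on a 16 GB GPU for activations and context, which is consistent with the article's "comfortably runs" characterization.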

  • Scientel announced the Gensonix AI LLM solution on February 25, 2026.

The players

Scientel

A U.S.-based systems technology company that designs and produces highly optimized high-end servers, bundled with its 'GENSONIX™ AI' DB software, as a single-source supplier of complete systems for AI solutions.

Meta Platforms (Meta)

The company that developed the Llama 3.2 3B model, a high-performance large language model (LLM) designed for efficient, low-latency deployment on small devices.

Advanced Micro Devices (AMD)

A semiconductor company that produces Radeon GPUs (graphics processing units) for rendering images, video, and 3D graphics on desktop computers, laptops, and consoles, and increasingly for AI workloads.

Norman Kutemperor

The CEO of Scientel.

AMD Radeon RX 6800 GPU

One of the smaller members of the AMD Radeon GPU family, with 16 GB of memory, which Scientel says can comfortably run the Gensonix AI LLM system.


What they’re saying

“We are pleased to provide a full LLM solution supporting 3 Billion parameter models even for the smaller members of the AMD Radeon series GPUs.”

— Norman Kutemperor, CEO (EINPresswire.com)

The takeaway

Scientel's Gensonix AI LLM solution, which combines the efficiency of its Gensonix NewSQL AI DB, the power of Meta's Llama 3B model, and the capabilities of AMD's Radeon GPU architecture, represents a significant step forward in making large language models practical and accessible on small-footprint systems. This could help address the power constraints faced by data centers hosting these models, potentially leading to more widespread adoption of LLMs in a wider range of applications and settings.