We host the mlx-engine application so that it can be run on our online workstations, either through Wine or directly.


Quick description of mlx-engine:

MLX Engine is the Apple MLX-based inference backend used by LM Studio to run large language models efficiently on Apple Silicon hardware. Built on top of the mlx-lm and mlx-vlm ecosystems, the engine provides a unified architecture that supports both text-only and multimodal models. Its design focuses on high-performance on-device inference, leveraging Apple's MLX stack to accelerate computation on M-series chips. The project introduces modular VisionAddOn components that allow image embeddings to be integrated seamlessly into language model workflows. It is bundled with newer versions of LM Studio but can also be used independently for experimentation and development. Overall, mlx-engine serves as a specialized, high-efficiency runtime for local AI workloads on macOS systems.

Features:
  • Apple MLX-optimized LLM inference engine
  • Unified support for text and multimodal models
  • VisionAddOn modular image embedding system
  • Native integration with LM Studio runtime
  • High-performance Apple Silicon acceleration
  • Standalone demo and Python environment support


Programming Language: Python.
Categories:
Machine Learning

©2024. Winfy. All Rights Reserved.

By OD Group OU – Registry code: 1609791 – VAT number: EE102345621.