Version Overview
Version Change History
New Features
Multi-core scheduling: Anti-affinity scheduling and multi-dimensional resource scoring are implemented at the cluster level based on service types, improving container deployment density by 10% with performance degradation of less than 5%.
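The scoring side of this feature can be pictured as a weighted combination of per-dimension headroom. The following Go sketch is a minimal illustration of that idea, assuming CPU and memory as the scored dimensions and a simple weighted sum with hypothetical weights; it is not openFuyao's actual scoring plugin.

```go
package main

import "fmt"

// nodeLoad holds the resource dimensions being scored. Both fields and the
// weighting scheme below are illustrative assumptions, not openFuyao's
// actual scoring model.
type nodeLoad struct {
	cpuUsedRatio float64 // fraction of allocatable CPU in use (0.0-1.0)
	memUsedRatio float64 // fraction of allocatable memory in use (0.0-1.0)
}

// score combines free CPU and free memory into a 0-100 node score; higher
// means more headroom across both dimensions.
func score(n nodeLoad, cpuWeight, memWeight float64) float64 {
	free := cpuWeight*(1-n.cpuUsedRatio) + memWeight*(1-n.memUsedRatio)
	return 100 * free / (cpuWeight + memWeight)
}

func main() {
	nodes := map[string]nodeLoad{
		"node-a": {cpuUsedRatio: 0.80, memUsedRatio: 0.30},
		"node-b": {cpuUsedRatio: 0.45, memUsedRatio: 0.50},
	}
	for name, n := range nodes {
		// Equal weights here; a real plugin could tune them per service type.
		fmt.Printf("%s score: %.1f\n", name, score(n, 1.0, 1.0))
	}
}
```

Anti-affinity by service type would act as a filter before a scoring step like this, excluding nodes that already run a pod of a conflicting type.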
Inherited Features
Table 1 Inherited features of openFuyao v25.06
| Feature | Change Description |
|---|---|
| Installation and Deployment | 1. Optimized the installation and deployment time: The bootstrap node and management cluster are combined to improve the efficiency of Cluster API installation. The time required from the start of installation to service cluster availability is reduced by 40% (if no interruption occurs). 2. Extended operating system compatibility: The Cluster API installation package now supports openEuler 20.03 LTS, openEuler 20.03 LTS SP3, and openEuler 24.03 LTS SP1. 3. Added a verification module: A verification module verifies node information when users add nodes to an existing cluster or create a new cluster, preventing installation failures caused by incorrect node information. 4. Optimized cluster status determination: The cluster status determination mechanism of the cluster lifecycle management panel is optimized to cover most ambiguous situations. |
| Colocation | 1. Defined three-level hierarchical QoS scheduling: Three QoS levels (HLS, LS, and BE) are introduced in colocation scenarios to ensure service quality, and QoS-level-based scheduling priorities refine resource scheduling. 2. Added watermark-based eviction: The Rubik-based node colocation engine now supports configurable CPU and memory watermarks for service eviction (see the sketch after Table 1). 3. Enhanced scheduling based on actual node resource load: The system detects the CPU and memory load of nodes in the cluster and preferentially schedules pods to lightly loaded nodes, balancing load between nodes and preventing application or node faults caused by overloading a single node. 4. Enhanced NUMA-aware scheduling: NUMA-aware scheduling is optimized for LS-level pods. 5. Added colocation monitoring: Cluster-level and node-level monitoring of CPU and memory usage and requests is provided for workloads at different QoS levels. |
| openFuyao Ray | Added the Ray monitoring panel to allow users to view cluster health information. |
| NPU Operator | Added support for installation and deployment in offline scenarios. |
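To make the watermark-based eviction in the Colocation row above concrete, here is a minimal Go sketch. The watermark values, the three-level QoS ordering (HLS > LS > BE), and the evict-BE-first policy are assumptions drawn from the description above, not Rubik's actual implementation.

```go
package main

import "fmt"

// qos models the three colocation service levels described above,
// in ascending priority: BE pods are evicted first.
type qos int

const (
	BE  qos = iota // best-effort offline workloads
	LS             // latency-sensitive
	HLS            // highest level, never evicted in this sketch
)

type pod struct {
	name  string
	level qos
}

// Watermarks for node utilization; the values are illustrative assumptions.
const (
	cpuWatermark = 0.90
	memWatermark = 0.85
)

// pickEvictions returns the pods to evict when either watermark is crossed,
// lowest QoS level first. A real engine would also weigh per-pod usage and
// stop once utilization drops back below the watermark.
func pickEvictions(cpuUtil, memUtil float64, pods []pod) []pod {
	if cpuUtil < cpuWatermark && memUtil < memWatermark {
		return nil
	}
	var victims []pod
	for _, p := range pods {
		if p.level == BE { // evict best-effort offline pods first
			victims = append(victims, p)
		}
	}
	return victims
}

func main() {
	pods := []pod{{"batch-job", BE}, {"web-api", HLS}, {"cache", LS}}
	fmt.Println(pickEvictions(0.93, 0.60, pods)) // CPU over watermark
}
```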
Removed Features
None.
Version Features
This is the second version of openFuyao. Its main functions and features are listed in Table 2. For details, see the User Guide.
Table 2 Functions and features of openFuyao
| Category | Feature | Description |
|---|---|---|
| Basic platform functions | Installation and Deployment | Integrates with the standard Cluster API installation and deployment tool and supports quick service cluster deployment. The management cluster provides interactive service cluster deployment capabilities in multiple scenarios on the unified management plane, including single-node or multi-node installation (including installation in HA mode), online or offline installation, cluster scaling, and in-place Kubernetes upgrades. |
| | Management Plane | Provides an out-of-the-box console and supports functions such as application management, application market, extension management, resource management, repository management, monitoring, alerting, user management, and command line interaction. |
| | Authentication and Authorization | The built-in OAuth 2.0 server supports the OAuth 2.0 protocol and functions such as application authentication, authorization, password reset, and password policies. It also provides a unified authentication and access solution for applications with and without a frontend interface. |
| | User Management | Provides cross-cluster multi-user management and enables platform-level and cluster-level users to be bound to roles such as administrator, operator, and observer. |
| | Multi-Cluster Management | Upgrades the current cluster to a management cluster to implement multi-cluster management. |
| | Command Line Interaction | Provides a web terminal on the cluster management plane based on the command line interface (CLI), so that cluster administrators can easily manage clusters on the console using backend kubectl commands. |
| Component installation management | Application Market | Allows users to browse, search for, and deploy Helm-based extensions and applications, and provides computing acceleration suites to unleash computing power. |
| | Application Management | Integrates the Helm v3 application package manager to quickly deploy, upgrade, roll back, and uninstall applications. Users can view Helm chart details, resources, logs, events, and monitoring information. |
| | Repository Management | Provides a built-in Harbor repository for uploading and managing Helm charts. Users can add and remove remote Harbor repositories and synchronize Helm charts from them. |
| | Extension Management | Implements a dynamic pluggable framework based on the ConsolePlugin CRD. Extensions' frontend interfaces integrate seamlessly into the openFuyao management plane, and extensions can be quickly deployed, upgraded, rolled back, enabled, disabled, and uninstalled using Helm charts. Extensions can also connect to the platform's authentication and authorization system, ensuring security and making components plug-and-play. |
| Kubernetes native resource management | Resource Management | Covers all core resources and custom resource definitions in Kubernetes, allowing users to add, delete, query, and modify them. |
| | Events | Reflects changes in native Kubernetes resources, such as pods, Deployments, and StatefulSets. |
| | RBAC Management | Allows users to set service accounts, roles, and role bindings to implement permission control on cluster resources. |
| Computing power scheduling optimization | Colocation | Supports hybrid deployment of online and offline services. During peak periods of online services, resource scheduling prioritizes online services over offline services; during off-peak periods, offline services are allowed to use oversold resources (see the first sketch after Table 2). This improves cluster resource utilization by 30% to 50%, with minimal QoS impact and a jitter ratio of less than 5%. |
| | NUMA-aware Scheduling | Implements cluster-level and node-level NUMA topology awareness and performs NUMA-aware scheduling for applications based on NUMA affinity to improve application performance. The average throughput is improved by 30% (Redis, for example, improves by an average of 30%); see the second sketch after Table 2. |
| | Multi-Core Scheduling | Implements service type–based anti-affinity scheduling and multi-dimensional resource scoring at the cluster level, improving container deployment density by 10% with less than 5% performance degradation. |
| | Ray | Provides Ray solutions with high usability, high performance, and high computing power utilization in cloud-native scenarios. Supports full lifecycle management of Ray clusters and jobs, reduces O&M costs, enhances cluster observability, fault locating, and optimization, and implements efficient computing power scheduling and management. |
| Automatic hardware management | KAE Operator | Implements minute-level automatic management of Kunpeng KAE hardware, including KAE hardware feature discovery as well as automatic management and installation of components such as drivers, firmware, and hardware device plug-ins. KAEs can be deployed and become ready within 5 minutes. |
| | NPU Operator | Implements minute-level automatic management of Ascend NPU hardware, including NPU hardware feature discovery as well as automatic management and installation of components such as drivers, firmware, hardware device plug-ins, the metric collection component, and the cluster scheduling component. NPUs can be deployed and become ready within 10 minutes. |
| Observability | Monitoring | Provides out-of-the-box metric collection and visualized display capabilities, supports monitoring of resources such as clusters, nodes, and workloads, and provides preconfigured monitoring dashboards. |
| | Custom Monitoring Dashboards | Allows users to customize the metrics to be monitored based on service requirements for accurate data observation and analysis. |
| | Logging | Collects various types of logs in a cluster, allows users to view and download logs, and reports alerts based on preset alert rules. |
| | Alerting | Monitors cluster statuses and triggers alerts when specific conditions are met, so that problems can be detected in a timely manner and necessary measures taken to ensure system stability and reliability. |
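As referenced in the Colocation row of Table 2, the first sketch below illustrates one common way oversold capacity can be derived: allocatable capacity minus the measured usage of online services, scaled by a safety factor. The formula and the 0.9 factor are illustrative assumptions, not openFuyao's exact computation.

```go
package main

import "fmt"

// oversoldCapacity estimates how much CPU can be resold to offline (BE)
// services during off-peak periods: allocatable capacity minus the actual
// usage of online services, scaled by a safety factor. This formula is an
// assumption for illustration only.
func oversoldCapacity(allocatableMilliCPU, onlineUsedMilliCPU int64, safety float64) int64 {
	free := allocatableMilliCPU - onlineUsedMilliCPU
	if free < 0 {
		return 0
	}
	return int64(float64(free) * safety)
}

func main() {
	// A 32-core node whose online services currently use 9.6 cores: with a
	// 0.9 safety factor, about 20.1 cores can be offered to BE workloads.
	fmt.Println(oversoldCapacity(32000, 9600, 0.9), "mCPU available for BE pods")
}
```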
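The second sketch, referenced from the NUMA-aware Scheduling row, shows the core affinity check: whether a pod's CPU request fits entirely within one NUMA node, so its memory accesses stay local and cross-socket traffic is avoided. The topology representation here is an illustrative assumption; a real implementation would read it from the node.

```go
package main

import "fmt"

// numaNode models the free CPUs on one NUMA node. The values used below are
// illustrative assumptions.
type numaNode struct {
	id       int
	freeCPUs int
}

// fitsSingleNUMA reports whether a pod's CPU request can be satisfied from a
// single NUMA node, the affinity condition NUMA-aware scheduling favors.
func fitsSingleNUMA(requestCPUs int, topo []numaNode) (int, bool) {
	for _, n := range topo {
		if n.freeCPUs >= requestCPUs {
			return n.id, true
		}
	}
	return -1, false
}

func main() {
	topo := []numaNode{{id: 0, freeCPUs: 2}, {id: 1, freeCPUs: 8}}
	if id, ok := fitsSingleNUMA(4, topo); ok {
		fmt.Printf("pin pod to NUMA node %d\n", id)
	} else {
		fmt.Println("no single-node fit; the scheduler scores this node lower")
	}
}
```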