Market Report
Product Code: 1797625
Intelligent Cockpit Patent Analysis Report, 2025
Patent Trend: Three Major Directions of Intelligent Cockpits in 2025
This report explores the development trends of cutting-edge intelligent cockpits from the perspective of patents. The research scope covers 8 categories of products, i.e., new cockpit display, cockpit interaction, in-cabin monitoring, infotainment, digital key, smart seat, smart speaker, and panoramic sunroof.
This report builds a relatively reliable intelligent cockpit patent dataset through keyword search, data cleaning, and duplicate checking across segments, and then conducts cross-analysis to characterize patents in the intelligent cockpit field at the current stage, covering the number of patents, patent distribution, hot patent technology maps, rankings of OEMs and suppliers by number of patents, tracking of featured patents, and the development trends of each segment.
About 5,000 Intelligent Cockpit Patents Are Published in China Every Year
As of May 31, 2025, intelligent cockpit patents in China totaled 53,673, of which 44,267 have been filed since 2015. Since 2021, around 5,000 new patents have been published each year.
In terms of patent types, intelligent cockpit patents are concentrated in two categories: cockpit interaction and infotainment. In addition, smart seat, digital key, and cockpit display are also key research areas for supply chain companies.
In terms of applicant types, suppliers hold the most intelligent cockpit patents, accounting for 57.1% of the total; OEMs rank second at 17.0%; universities/research institutions take a 13.0% share.
The patent layout of suppliers shows the following characteristics:
Comprehensive cockpit suppliers such as Baidu and PATEO specialize in infotainment and cockpit interaction, generally holding over 50 infotainment patents and over 20 cockpit interaction patents each.
FUTURUS and Jingwei Hirain, among others, focus their patent layout on cockpit displays.
Huawei and Shenzhen Yinwang Intelligent Technology, among others, place their emphasis on intelligent interiors such as smart seats.
The patent layout of OEMs shows the following characteristics:
Infotainment and cockpit interaction, as the foundation of intelligent cockpits, have become the R&D focus of various OEMs.
FAW, Changan, and Dongfeng have filed comparatively more cockpit display patents, using technologies such as AR HUD, 3D display, and holographic imaging to create immersive cockpit experiences.
Geely, Chery, FAW, and Dongfeng, among others, have invested heavily in R&D of smart seats to improve ride comfort, each generally holding over 50 patents.
As a convenience feature, digital keys have been competitively deployed by multiple OEMs such as Changan, Chery, Geely, and BESTUNE.
The intelligent cockpit is the core carrier through which users perceive their vehicles. Its future development will place more emphasis on technology integration, user experience, and ecosystem construction.
Technology Integration: Intelligent cockpits are deeply integrated with technologies such as autonomous driving, the Internet of Vehicles, and AI, gradually breaking down the computing power barriers (e.g., cockpit-driving integration), time-space barriers (e.g., the Internet of Vehicles, satellite communication, V2X), and function barriers (e.g., scenario linkage), realizing functional collaboration, and enabling full-scenario coverage of cockpit services.
Enhanced User Experience: User experience gets improved by personalized and customized services and scenario-based applications. For example, accurate user identification is achieved through multi-modal perception, and the cockpit environment is adjusted according to the user's status and emotion. Based on user profile and historical data, proactive content and service recommendations are provided to achieve a personalized and customized experience. For the functional requirements in different scenarios such as rest mode, sightseeing mode, office mode, and child-care mode, multiple functions are linked to achieve convenient operation and an immersive user experience.
The development of AI has delivered fruitful results in both intelligent driving and intelligent cockpits. However, today's isolated designs cannot integrate the different domains' sensors, and cross-domain integration still has a long way to go. At the current stage, the integration of the intelligent cockpit and intelligent driving mostly stays at the level of simple cross-domain information interaction, where intelligent driving scenarios are displayed through the cockpit, or driving and riding instructions are transmitted through it. Baidu and Changan, among others, have taken important steps toward cockpit-driving integration based on their business needs.
Case 1: Baidu's Patent on the Integration of Autonomous Driving and Intelligent Cockpit
In July 2025, Baidu disclosed a patent: Method, Device, Equipment and Medium for Determining Passenger Status Based on Unmanned Driving.
Patent Number: CN120308129A
Characteristics: Applied to unmanned vehicles, the method judges whether passengers have exited according to multi-dimensional environmental monitoring information (door, seat, vision) obtained from intelligent cockpit sensors, improving the accuracy of that judgment.
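As a rough illustration of this kind of multi-signal fusion, the sketch below combines door, seat-occupancy, and in-cabin vision signals into a single exit decision. It is not Baidu's implementation; the class name, field names, and the fusion rule itself are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class CabinSnapshot:
    # Hypothetical sensor readings from the intelligent cockpit (not from the patent text).
    door_opened_then_closed: bool    # door open/close cycle observed after the ride ended
    seat_occupied: bool              # seat pressure / occupancy sensor
    person_detected_by_camera: bool  # in-cabin vision result

def passenger_has_exited(snap: CabinSnapshot) -> bool:
    """Fuse door, seat, and vision evidence into one exit decision.

    Follows the patent's idea of multi-dimensional environmental monitoring;
    the exact fusion rule here is an assumption.
    """
    # Require agreement between seat and vision, gated by a door event,
    # so that a single noisy sensor cannot flip the decision on its own.
    return (
        snap.door_opened_then_closed
        and not snap.seat_occupied
        and not snap.person_detected_by_camera
    )

if __name__ == "__main__":
    snap = CabinSnapshot(door_opened_then_closed=True,
                         seat_occupied=False,
                         person_detected_by_camera=False)
    print(passenger_has_exited(snap))  # True
```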
Case 2: Changan's Data Transmission Method for Intelligent Driving and Intelligent Cockpit
In January 2025, Changan Automobile announced a patent: Intelligent Driving Data Transmission Method, Device, Intelligent Cockpit System and Vehicle.
Patent Number: CN119636612A
Characteristics: The method receives intelligent driving data over an external Ethernet interface and receives the demand information of each functional application in the intelligent cockpit system over an internal Ethernet interface. It then distributes the data inside the intelligent cockpit system over Ethernet, supporting real-time data interaction for functions such as navigation, entertainment, and driving assistance.
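The gateway pattern described above (driving data in on an external interface, per-application demand registered on an internal interface, fan-out inside the cockpit) can be sketched as follows. The class, message format, and data-type names are illustrative assumptions, not details from the patent.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

# Hypothetical frame from the intelligent driving domain: (data_type, payload).
DrivingMessage = Tuple[str, bytes]

class CockpitDataGateway:
    """Sketch of an in-cockpit distributor for intelligent driving data.

    Applications (navigation, entertainment, driving assistance) declare which
    data types they need via the internal interface; incoming driving data is
    then fanned out only to the applications that asked for it.
    """

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[bytes], None]]] = defaultdict(list)

    def register_demand(self, data_type: str, handler: Callable[[bytes], None]) -> None:
        # Called once per functional application's demand information.
        self._subscribers[data_type].append(handler)

    def on_driving_data(self, message: DrivingMessage) -> None:
        # Called when a frame arrives on the external Ethernet interface.
        data_type, payload = message
        for handler in self._subscribers.get(data_type, []):
            handler(payload)

if __name__ == "__main__":
    gw = CockpitDataGateway()
    gw.register_demand("lane_info", lambda p: print("navigation got", p))
    gw.register_demand("lane_info", lambda p: print("ADAS HMI got", p))
    gw.on_driving_data(("lane_info", b"\x01\x02"))
```

Routing only to applications that declared a demand keeps internal traffic bounded as more cockpit applications are added, which matches the demand-driven distribution the patent describes.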
The intelligent cockpit is evolving from "function stacking" into a "cognitive hub". Deep integration with AI technology is reconstructing the human-vehicle interaction model. With the application of AI foundation models, cockpit interaction becomes more intelligent, proactive, and emotional, and intelligent cockpits tend toward simpler functions and easier operation.
Case 1: XPeng's Patent on Vision-Language Model
In July 2025, XPeng disclosed a patent: Request Processing Method, Device, Equipment and Medium Based on Vision Language Model.
Patent Number: CN120259425A
Technical Features: It realizes in-depth understanding of graphic and text information, and task execution through a vision-language model (VLM), aiming to solve the problem of low recognition accuracy of target objects (such as icons, characters, and wireframes) on the IVI screen, especially in recognition of small targets and complex-feature objects. Its core value lies in the deep adaptation of the vision-language model to IVI scenarios, providing a feasible technical solution for precise interaction in intelligent cockpits and promoting the upgrade of vehicle systems from "passive response" to "precise understanding".
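The request-processing flow can be sketched roughly as below: a screenshot of the IVI screen plus the user's request go to a vision-language model, which returns the matching UI element for the cockpit to act on. The model call is a placeholder and the coordinates are invented; XPeng's actual pipeline is not disclosed at this level of detail.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class UiTarget:
    label: str                       # e.g. "Bluetooth settings icon"
    bbox: Tuple[int, int, int, int]  # pixel box on the IVI screenshot

def run_vlm(screenshot: bytes, prompt: str) -> Optional[UiTarget]:
    # Placeholder for the vision-language model call; the patent does not name
    # a specific model, so a canned answer stands in here for illustration.
    return UiTarget(label="Bluetooth settings icon", bbox=(1040, 60, 1104, 124))

def handle_request(screenshot: bytes, user_request: str) -> Optional[Tuple[int, int]]:
    """Resolve a user request ("open Bluetooth settings") to a tap point on the IVI screen.

    The VLM reads the screenshot together with the request and returns the matching
    UI element (icon, text, wireframe); the cockpit then taps its center.
    """
    target = run_vlm(screenshot, f"Locate the UI element that satisfies: {user_request}")
    if target is None:
        return None  # fall back to a conventional voice-command pipeline
    x1, y1, x2, y2 = target.bbox
    return (x1 + x2) // 2, (y1 + y2) // 2

if __name__ == "__main__":
    print(handle_request(b"<jpeg bytes>", "open Bluetooth settings"))  # (1072, 92)
```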
Case 2: iGentAI Computing's Intelligent Cockpit System Based on AI Foundation Model
In July 2025, iGentAI Computing disclosed a patent: Vehicle Intelligent Cockpit Terminal and System Based on AI Foundation Model.
Patent Number: CN120096602B
Technical Features: Its core is to analyze multi-dimensional sensor data in real time through an AI foundation model, predict dangerous states in advance, and trigger hierarchical emergency measures to improve vehicle safety performance.
The sensor data collected in real time in this patent includes:
Environmental data: interior and exterior vehicle temperature (accuracy: ±0.5°C), smoke concentration, ambient noise;
Mechanical data: battery internal pressure, vehicle body deformation (in millimeters), collision acceleration (tri-axis G-value);
Visual data: postures of in-vehicle occupants collected by cameras, images of external obstacles (resolution: 1280×720, 15 fps).
The data is collected via the CAN/LIN bus at a 100ms cycle. After filtering and normalization, it is concatenated with the previous 5 seconds of data to form a temporal sequence, providing historical context for model prediction.
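A minimal sketch of assembling such a temporal input is shown below, following the figures stated in the patent summary (100 ms collection cycle, normalization, a 5-second history window). The feature subset and normalization ranges are illustrative assumptions.

```python
from collections import deque
from typing import Optional
import numpy as np

SAMPLE_PERIOD_S = 0.1                      # 100 ms collection cycle (from the patent)
HISTORY_S = 5.0                            # previous 5 seconds kept as context
WINDOW = int(HISTORY_S / SAMPLE_PERIOD_S)  # 50 samples per temporal sequence

# Illustrative min-max ranges used for normalization; the patent does not disclose
# the actual scaling, and this feature list is a simplified subset.
RANGES = {
    "cabin_temp_c": (-40.0, 85.0), "smoke_ppm": (0.0, 1000.0), "noise_db": (0.0, 120.0),
    "battery_pressure_kpa": (0.0, 500.0), "body_deform_mm": (0.0, 300.0),
    "accel_gx": (-50.0, 50.0), "accel_gy": (-50.0, 50.0), "accel_gz": (-50.0, 50.0),
}

history: deque = deque(maxlen=WINDOW)      # rolling buffer of normalized samples

def normalize(sample: dict) -> np.ndarray:
    return np.array([(sample[k] - lo) / (hi - lo) for k, (lo, hi) in RANGES.items()],
                    dtype=np.float32)

def push_sample(sample: dict) -> Optional[np.ndarray]:
    """Append one 100 ms sample; return a (50, 8) sequence once the window is full."""
    history.append(normalize(sample))
    if len(history) < WINDOW:
        return None                        # not enough history yet
    return np.stack(list(history))         # temporal sequence fed to the foundation model
```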
Scenario-based application is one of the core development directions of intelligent cockpits. Its essence is to transform the vehicle into a "third living space" that can actively perceive, understand, and respond to user needs through the collaboration of software and hardware.
The scenario-based functions of intelligent cockpits mainly involve three aspects:
Scenario definition: combine users' functional requirements with real-time scenarios to provide active control of the hardware system.
Dynamic scenario switching: let users customize trigger conditions (e.g., temperature and time) that automatically link hardware modules such as the air conditioner and seats (see the sketch after this list).
Emotional interaction: use fuzzy intention recognition (e.g., "It's stuffy in the car") and edge learning to realize natural interaction in specific scenarios.
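A minimal sketch of the dynamic scenario switching idea, assuming user-defined trigger rules over cabin temperature and clock time, is given below; the rule format and hardware command strings are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import time
from typing import Callable, Dict, List

@dataclass
class CabinState:
    cabin_temp_c: float
    clock: time

@dataclass
class ScenarioRule:
    """User-customized trigger that links hardware actions, e.g. a 'commute' scenario."""
    name: str
    condition: Callable[[CabinState], bool]
    actions: List[str] = field(default_factory=list)  # hardware commands to issue

def evaluate(rules: List[ScenarioRule], state: CabinState) -> Dict[str, List[str]]:
    # Return the hardware actions of every scenario whose trigger currently holds.
    return {r.name: r.actions for r in rules if r.condition(state)}

if __name__ == "__main__":
    rules = [
        ScenarioRule(
            name="hot_afternoon_commute",
            condition=lambda s: s.cabin_temp_c > 30 and s.clock >= time(17, 0),
            actions=["ac.set_temp(22)", "seat.ventilation(on)"],
        ),
    ]
    print(evaluate(rules, CabinState(cabin_temp_c=33.5, clock=time(17, 30))))
```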
Case 1: Foryou's Intelligent Cockpit Control Method Based on Scenario Mode
In May 2025, Foryou (Huayang Group) disclosed a patent: A Method for Controlling an Automobile Intelligent Cockpit Based on a State Machine.
Patent Number: CN120029128A
Technical Features: According to users' functional requirements, cockpit scenarios are divided into 7 categories (vehicle dormancy, regular vehicle use, remote control, OTA update, sentry mode, charging/discharging, and life detection), and 5 corresponding target working modes are defined. Each mode specifies the power supply state, network state, and functional system state, enabling the cockpit system to operate flexibly, efficiently, and with low power consumption in different scenarios.
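The scenario-to-mode mapping can be pictured as a simple state machine, as in the sketch below. The seven scenario categories come from the patent summary, but the five working modes shown and their power/network/function settings are illustrative assumptions, since the summary does not spell out the exact table.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Scenario(Enum):
    # The 7 scenario categories named in the patent summary.
    DORMANCY = auto()
    REGULAR_USE = auto()
    REMOTE_CONTROL = auto()
    OTA_UPDATE = auto()
    SENTRY = auto()
    CHARGING_DISCHARGING = auto()
    LIFE_DETECTION = auto()

@dataclass(frozen=True)
class WorkingMode:
    name: str
    power: str      # power-supply state
    network: str    # network state
    functions: str  # functional-system state

# Five illustrative target working modes; the assignment below is an assumption.
LOW_POWER  = WorkingMode("low_power",  power="standby", network="off",      functions="minimal")
MONITORING = WorkingMode("monitoring", power="partial", network="cellular", functions="sensors+upload")
FULL       = WorkingMode("full",       power="on",      network="full",     functions="all")
SERVICE    = WorkingMode("service",    power="on",      network="full",     functions="update_only")
REMOTE     = WorkingMode("remote",     power="partial", network="cellular", functions="remote_api")

SCENARIO_TO_MODE = {
    Scenario.DORMANCY: LOW_POWER,
    Scenario.REGULAR_USE: FULL,
    Scenario.REMOTE_CONTROL: REMOTE,
    Scenario.OTA_UPDATE: SERVICE,
    Scenario.SENTRY: MONITORING,
    Scenario.CHARGING_DISCHARGING: MONITORING,
    Scenario.LIFE_DETECTION: MONITORING,
}

def switch(scenario: Scenario) -> WorkingMode:
    """State-machine transition: pick the target working mode for a scenario."""
    return SCENARIO_TO_MODE[scenario]
```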
Case 2: SERES' Scenario Classification Model Training Method
In February 2025, SERES disclosed a patent: Scenario Classification Model Training Method, Device, Computer Equipment and Storage Medium.
Patent Number: CN119399532A
Technical Features: Focusing on the in-vehicle viewing scenario, the method trains a scenario classification model to recognize wind-direction scenarios in videos and dynamically adjusts the air conditioner's air supply direction accordingly, enhancing the immersive in-vehicle viewing experience.
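A very small sketch of the idea, assuming a placeholder classifier and an invented label-to-vent mapping, might look like this:

```python
from typing import Sequence

# Hypothetical label set; the patent speaks of recognizing "wind direction scenarios"
# in the video being watched (e.g. a breeze blowing from screen-left).
SCENE_TO_VENT = {
    "breeze_from_left":  "vent.left_to_right",
    "breeze_from_right": "vent.right_to_left",
    "headwind":          "vent.face_forward",
    "calm":              "vent.hold",
}

def classify_scene(frames: Sequence[bytes]) -> str:
    """Placeholder for the trained scenario classification model."""
    return "breeze_from_left"  # stub result for illustration

def adjust_air_supply(frames: Sequence[bytes]) -> str:
    """Pick an AC air-supply command that matches the on-screen wind direction."""
    return SCENE_TO_VENT.get(classify_scene(frames), "vent.hold")

if __name__ == "__main__":
    print(adjust_air_supply([b"frame0", b"frame1"]))  # vent.left_to_right
```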
Case 3: NIO's Multimedia Content Interaction Based on Vehicle Cockpit
In June 2025, NIO disclosed a patent: Multimedia Content Interaction Method, Device, Equipment and Storage Medium Based on Vehicle Cockpit.
Patent Number: CN120191303A
Technical Features: For scenarios such as audio-visual media, interactive games, and music playback, the method connects multimedia content with vehicle hardware through an interface encapsulation platform system, constructing a real-time "perception-decision-execution" interaction process. For example, when playing a movie, the system automatically dims the lights, adjusts the seat angle, and releases a fragrance matching the scene via the air conditioner (e.g., a fresh fragrance for an ocean scene).
The hardware that can be called in this patent includes:
Suspension system: Simulate motion effects (e.g., vibration and tilt) in videos/games, with parameters including response intensity and frequency.
Air conditioning system: Adjust temperature, air volume, and air direction according to scenarios.
Ambient light system: Synchronize with screen brightness, color, or audio rhythm.
Fragrance system: Release scents matching the scenario (e.g., forest and restaurant).
Audio system: Spatial sound positioning to enhance immersion.
Seating system: Trigger massage and heating/ventilation functions according to content.
Steer-by-wire system: Simulate steering wheel feedback during game interaction (decoupled from tire steering to ensure safety).
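A minimal sketch of such an interface encapsulation layer, with multimedia content publishing scene events and hypothetical hardware adapters translating them into commands, is shown below; none of the names come from NIO's patent.

```python
from typing import Callable, Dict, List

class CockpitHardwareBus:
    """Sketch of an encapsulation layer between content and cockpit hardware.

    Multimedia content emits scene events ("ocean", "night_drive"); each hardware
    adapter registered on the bus translates the event into its own commands.
    """

    def __init__(self) -> None:
        self._adapters: List[Callable[[str], Dict[str, str]]] = []

    def register(self, adapter: Callable[[str], Dict[str, str]]) -> None:
        self._adapters.append(adapter)

    def on_scene(self, scene: str) -> Dict[str, str]:
        # Decision/execution step: collect each adapter's command for the scene.
        commands: Dict[str, str] = {}
        for adapter in self._adapters:
            commands.update(adapter(scene))
        return commands

# Illustrative adapters for a few of the hardware systems listed above.
def ambient_light(scene: str) -> Dict[str, str]:
    return {"ambient_light": "deep_blue" if scene == "ocean" else "warm_white"}

def fragrance(scene: str) -> Dict[str, str]:
    return {"fragrance": "fresh" if scene == "ocean" else "neutral"}

def seat(scene: str) -> Dict[str, str]:
    return {"seat": "recline_20deg" if scene in ("ocean", "movie") else "hold"}

if __name__ == "__main__":
    bus = CockpitHardwareBus()
    for adapter in (ambient_light, fragrance, seat):
        bus.register(adapter)
    print(bus.on_scene("ocean"))
```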