Market Report
Product Code
2017481
Intelligent Vehicle New Technology Application Analysis Report, 2025-2026
New Technology Research: Innovative Products such as Bionic Cameras, Vision-LiDAR Fusion Sensors, Auditory Sensors Further Enhance Vehicle Perception Capabilities
Foreword
ResearchInChina released the New Intelligent Vehicle Technology Application Analysis Report, 2025-2026. It combs through new products and technology trends in four major sectors (intelligent cockpit, intelligent driving, body & chassis, and energy & powertrain) in 2025 and Q1 2026, summarizes representative emerging technologies and innovative applications, and distills hundreds of industry characteristics.
New sensor design stands out as one of the industry highlights during 2025-2026. Innovative applications abound in LiDAR, radar and cameras, as well as auditory, gas and other sensors.
LiDAR Sector:
Huawei, Hesai, and RoboSense, among others, have launched multi-channel LiDARs to meet L3/L4 autonomous driving requirements.
Leishen Intelligent System's fiber-optic LiDAR adopts a 1,550 nm fiber laser, with a maximum detection range of 1,500 m and a precision of ±5 cm.
Huawei's phased-array LiDAR supports seamless switching between multiple bands and real-time tracking of complex road conditions, improving detection accuracy by 30%.
Fortsense's all-solid-state optical scanning technology has entered the productization phase, boosting light utilization efficiency from the industry average of about 10% to over 80%.
Radar Sector:
4D radar remains a hot spot. sinPro, Starleading, Aptiv, Mobileye and others have launched 4D radar products, extending detection range to 300-400 meters and enhancing 3D perception, penetration, contour profiling and static small-target detection capabilities.
5D radar delivers more accurate and stable recognition in target tracking and classification, solving pain points of 4D imaging radar in intelligent driving applications, e.g., misclassifying slow vehicles as stationary targets, falsely identifying large trucks as multiple vehicles, and missing pedestrians crossing roads. MuNiu Technology and Ireland's Provizio add a "micro-motion" dimension to 4D radar to enable 5D radar applications.
Camera Sector:
Inspired by biological vision systems, bionic cameras achieve a wider field of view and deeper visual perception. Institutions including Korea Advanced Institute of Science and Technology (KAIST) and Institute of Science Tokyo are focusing on their development.
Three-layer stacked CMOS LOFIC technology, terahertz vision sensors, infrared thermal imaging systems, vision-LiDAR fusion sensors and so on improve dynamic range, readout speed, detection range and accuracy of vision systems through novel structural designs.
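As a rough illustration of how a LOFIC-style (lateral overflow integration capacitor) pixel extends dynamic range: a high-conversion-gain readout covers dark regions, and a rescaled low-gain overflow readout takes over where the high-gain channel saturates. This is a generic sketch of the dual-gain merging principle, not Sony's actual readout chain; all values are hypothetical.

```python
import numpy as np

def merge_dual_gain(high_gain, low_gain, gain_ratio, sat_level):
    """Merge high-gain and low-gain readouts of the same scene.

    Where the high-gain channel saturates at sat_level, fall back to the
    low-gain (overflow) channel rescaled by gain_ratio, extending the
    usable dynamic range.
    """
    high = np.asarray(high_gain, dtype=float)
    low = np.asarray(low_gain, dtype=float) * gain_ratio
    return np.where(high < sat_level, high, low)

# Example: scene radiance spanning four decades.
scene = np.array([0.5, 50.0, 5000.0])
hg = np.minimum(scene, 100.0)   # high-gain channel clips at 100
lg = scene / 16.0               # low-gain channel, 16x smaller signal
hdr = merge_dual_gain(hg, lg, gain_ratio=16.0, sat_level=100.0)
```

With these toy numbers, the merged output recovers the full scene: dark pixels come from the high-gain path and the saturated pixel from the rescaled overflow path.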
In addition, auditory sensors and gas/particle sensors monitor sound, gases (e.g., CO2/CO) or particles, enabling multi-dimensional data collection and enhancing functions such as intelligent driving, vehicle fault warning, child presence detection and in-cabin air quality monitoring.
Bionic cameras mimic biological vision systems to achieve a wider FOV or deeper perception. Advanced vision systems are expected to be applied in autonomous driving, drones, robotics and other fields to improve image accuracy. During 2025-2026, Japanese and South Korean research institutions have continued R&D on bionic cameras.
In 2025, the Korea Advanced Institute of Science and Technology (KAIST) announced a new bionic camera based on the insect compound-eye structure, applicable to high-speed motion capture, security surveillance, mobile device cameras and other fields.
Performance Features:
Ultra-high frame rate: 9120 frames per second
Excellent low-light imaging: can capture objects up to 40 times dimmer than those detectable by conventional high-speed cameras
Ultra-slim profile: thickness of <1mm, easy to integrate into various systems
Technical Features:
Employs a compound-eye-like structure that allows for the parallel acquisition of frames from different time intervals.
Uses multiple optical channels and temporal summation to boost signal-to-noise ratio by accumulating light over overlapping periods.
Introduces a "channel-splitting" technique, achieving frame rates thousands of times higher than the native frame rate of the underlying image sensor.
Applies "compressive image reconstruction" algorithm to eliminate blur caused by frame integration and reconstruct sharp images.
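The temporal-summation step above can be illustrated numerically: averaging N independent noisy captures of the same scene improves the signal-to-noise ratio by roughly √N. A toy sketch of that principle (my own illustration, not KAIST's pipeline; all parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 1.0          # true pixel intensity
noise_sigma = 1.0     # per-capture noise (standard deviation)
n_channels = 16       # optical channels summed per output frame

# One noisy capture per channel of the same instant, many trials.
captures = signal + rng.normal(0.0, noise_sigma, size=(n_channels, 100_000))

single_snr = signal / noise_sigma
summed = captures.mean(axis=0)          # channel/temporal summation
summed_snr = signal / summed.std()

# Summing 16 channels should improve SNR by about sqrt(16) = 4x.
```

The measured ratio `summed_snr / single_snr` comes out near 4, matching the √N rule.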
In 2025, a research team at Institute of Science Tokyo, inspired by the insect antenna's wind-sensing mechanism, developed bionic wind-sensing technology. The technology mimics insect receptors, converting airflow pressure changes into electrical signals to calculate wind direction and speed, and enhances performance using multi-segment antenna-like structures. It features high sensitivity, small size and low power consumption, making it suitable for meteorological monitoring, drone flight and other applications.
High-precision wind detection: uses micro strain sensors and a convolutional neural network (CNN), achieving up to 99.5% wind direction accuracy, and 85.2% accuracy even with a short data length (0.2 flapping cycles).
Multi-sensor collaboration: multiple strain sensors (e.g., 7 strain gauges) are installed on the biomimetic flexible wing. Through multi-point strain data acquisition and machine learning algorithms, the accuracy and stability of wind direction classification are significantly improved.
Lightweight & low-cost: traditional flow-sensing devices are difficult to apply to small aerial robots due to weight and size limitations. This technology uses low-cost commercial components (such as strain gauges) and simple wing strain sensing to achieve efficient wind direction classification without additional specialized equipment.
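The multi-gauge classification idea can be sketched with a toy nearest-centroid classifier standing in for the CNN described above. The sinusoidal gauge-response model, the noise level, and all parameters here are hypothetical illustrations, not the team's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_gauges = 7                         # strain gauges on the biomimetic wing
directions = np.arange(0, 360, 45)   # 8 wind-direction classes

# Hypothetical response model: each gauge responds sinusoidally to wind
# direction, with a phase offset set by its position along the wing.
phases = np.linspace(0.0, np.pi, n_gauges)

def strain_pattern(theta_deg):
    return np.cos(np.radians(theta_deg) - phases)

# Noise-free templates, one per direction class.
templates = np.array([strain_pattern(d) for d in directions])

def classify(sample):
    """Nearest-centroid wind-direction classification."""
    dists = np.linalg.norm(templates - sample, axis=1)
    return directions[np.argmin(dists)]

# Evaluate on noisy synthetic samples.
trials, correct = 200, 0
for _ in range(trials):
    true_dir = rng.choice(directions)
    noisy = strain_pattern(true_dir) + rng.normal(0.0, 0.1, n_gauges)
    correct += classify(noisy) == true_dir
accuracy = correct / trials
```

Because the seven gauge responses are phase-shifted copies of one another, the 8 direction classes produce well-separated strain patterns, and even this crude classifier resolves them reliably at moderate noise.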
The stacked CMOS image sensor currently used by Sony has a two-layer structure: the upper layer is a photosensitive pixel array (photodiode layer), and the lower layer is a logic circuit layer (responsible for image processing).
Sony has also long been committed to adding a third layer to its stacked CMOS image sensors, aiming to further improve dynamic range, sensitivity, noise control, readout efficiency, speed and resolution, especially video performance, breaking through the current processing bottleneck of high-resolution video recording.
At CES 2026, U.S.-based Teradar unveiled its flagship terahertz vision sensor, Teradar Summit™, billed as the world's first long-range, high-resolution sensor designed for high performance in any weather, filling a critical gap left by legacy radar and LiDAR sensors.
Features of Teradar Summit Terahertz Vision Sensor:
Architecture: Solid state digital phased array
Range: 300m
Native Resolution: 0.13°
Point Cloud: 3D + Doppler
4D Measurement: Range, Azimuth, Elevation, and Relative Velocity
Autonomous Vehicle Compatibility: L2 - L5
Weather Performance: Day, Night, Fog, Rain, Snow, Sleet, Dust
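The 4D measurement listed above (range, azimuth, elevation, relative velocity) maps directly to a Cartesian point plus a radial speed. A generic conversion sketch (my own illustration, not Teradar's API; the axis convention of x-forward, y-left, z-up is an assumption):

```python
import math

def measurement_to_point(r, az_deg, el_deg, v_radial):
    """Convert a (range, azimuth, elevation, Doppler) measurement to
    Cartesian coordinates (x forward, y left, z up) plus radial speed."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return x, y, z, v_radial

# A target 300 m away, 10 deg to the left, 2 deg up, closing at 5 m/s.
x, y, z, v = measurement_to_point(300.0, 10.0, 2.0, -5.0)
```

The Doppler channel passes through unchanged: it is the radial (line-of-sight) velocity component, not a full 3D velocity vector, which is why tracking stacks fuse several such measurements over time.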
Benefits of Tapping the Terahertz Band:
Terahertz waves lie between the microwave band used by radar and the infrared band used by LiDAR on the electromagnetic spectrum. Their unique wavelength characteristics give them high resolution and good penetration under specific conditions (such as dry air, short distances, and non-polar obstructions).
Teradar's Modular Terahertz Engine (MTE) is an all-solid-state sensor platform built on proprietary transmit (TX), receive (RX), and core processing chips, which deliver crystal-clear vision, detect small objects at great distances, and maintain uncompromised reliability in any environment, day or night, in rain, fog and snow.
The Summit Terahertz Vision Sensor will be priced between radar and lidar, expected to be a few hundred US dollars, offering a price advantage.
Summit's unique ability to deliver reliable, high quality data to AD/ADAS has attracted Tier1s and automotive OEMs around the world. Currently in eight development partnerships across the U.S. and Germany, Teradar will begin bidding on high volume production programs in 2026, targeting start of production (SOP) in 2028.
In 2025, Kyocera launched the world's first "camera-LiDAR" fusion sensor. The sensor achieves zero-parallax real-time data integration via optical axis alignment. Featuring high resolution and durability, it is applicable to autonomous driving, robot navigation, smart security and other fields.
Features:
High resolution (world's highest laser irradiance density, 0.045°): utilizing the company's proprietary laser scan unit technology from MFPs and printers, it can detect a 30 cm falling object at a distance of 100 m.
High durability with proprietary MEMS mirror: a proprietary MEMS mirror, developed with Kyocera's advanced manufacturing and ceramic-package technologies, combined with high-resolution laser scanning, supports high-precision sensing for industries including autonomous vehicles, marine/ships, heavy machinery, and more.
Support for customized solutions: Each element is developed and manufactured by Kyocera for total control and customization, from MEMS mirrors to optical systems, electrical circuits, and software.
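The 30 cm-at-100 m detection claim is consistent with the stated 0.045° scan resolution, which a line of trigonometry confirms: the object subtends about 0.17°, several times the angular spacing between scan samples.

```python
import math

resolution_deg = 0.045   # stated laser scan resolution
object_size_m = 0.30     # 30 cm falling object
distance_m = 100.0

# Angle subtended by the object at 100 m.
subtended_deg = math.degrees(math.atan(object_size_m / distance_m))

# Number of scan samples that land on the object.
samples_on_target = subtended_deg / resolution_deg
```

At 0.045° spacing, roughly 3-4 scan lines cross a 30 cm target at 100 m, enough to register it as an object rather than a single noisy return.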
In September 2025, Fuyao's in-cabin laser-vision fusion solution made its debut on the new AITO M7.
Fuyao's "Fused Intelligent Driving Front Windshield" deeply integrates LiDAR and camera sensors into the front windshield glass, centered on "in-cabin integration". It solves industry challenges of LiDAR signal attenuation caused by curved glass via innovative materials and precision processes, achieving high transmittance of near-infrared light and delivering a simpler, more stable and reliable perception solution for intelligent driving systems.
Current mainstream autonomous vehicles relying on pure vision or vision-radar fusion generally cannot recognize critical external sound events (sirens, bicycle bells, etc.), creating perception blind spots. Giving vehicles "hearing" with acoustic sensors and AI algorithms to close this gap has become a clear industry evolution path and has entered the prototype development and testing phase.
In September 2025, the Fraunhofer Institute for Digital Media Technology (IDMT) launched the "Hearing Car" project, integrating microphone arrays and AI to complement key capabilities missing from traditional perception systems.
Technical Composition:
Hardware: high-sensitivity microphone arrays integrated into the vehicle body or windshield.
Software: AI algorithms classify and recognize specific sounds (ambulance sirens, bicycle bells, children's shouts, etc.) and link to vehicle control systems.
Interactive display: the windshield shows warnings (e.g., "ACHTUNG! SIRENE ERKANNT" in German, namely, "WARNING! SIREN DETECTED").
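The classify-then-link flow described above can be sketched as a minimal event-to-action mapping. The class labels, messages, confidence threshold, and control actions here are hypothetical illustrations, not IDMT's actual interface:

```python
# Hypothetical mapping from recognized acoustic events to a display
# message and a vehicle-control action.
EVENT_ACTIONS = {
    "ambulance_siren": ("WARNING! SIREN DETECTED", "yield_and_pull_over"),
    "bicycle_bell": ("Bicycle nearby", "slow_down"),
    "child_shout": ("Child nearby", "slow_down"),
}

def handle_acoustic_event(label, confidence, threshold=0.8):
    """Return (display_message, control_action) for a classified sound,
    or None when the event is unknown or below the confidence threshold."""
    if confidence < threshold or label not in EVENT_ACTIONS:
        return None
    return EVENT_ACTIONS[label]

result = handle_acoustic_event("ambulance_siren", 0.95)
```

A real system would sit downstream of the microphone-array classifier and feed the action into the planner; the point here is only the separation between sound recognition and the control-system hook.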
Application Scenarios:
Blind-spot detection: detects sounds (e.g., bicycles and children from alleys) in visual blind spots.
Emergency response: automatically yields to emergency vehicles (ambulances), adjusts path or pulls over.
Human-machine collaboration: serves as an advanced safety assistance function (e.g., alerting the human driver) during the not-yet-fully-automated driving stage.
Overview of New Intelligent Cockpit Technology Application Characteristics
Overview of New Intelligent Driving Technology Application Characteristics
Overview of New Vehicle Body/Chassis/Network Communication Technology Application Characteristics
Overview of New Energy/Powertrain Technology Applications
Overview of Other New Technology Applications