BEIJING, Nov. 9, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced a semantic segmentation method based on multi-modal data fusion, which uses multi-modal data to compensate for the limitations of single-modal data and improve segmentation accuracy. Multi-modal data fusion refers to combining data from different sensors or modalities to provide more comprehensive and accurate information.
Multi-modal data fusion is of great significance in semantic segmentation. By integrating information from different sensors or modalities, it exploits the complementary strengths of each data type to build a more comprehensive and enriched feature representation, yielding a fuller understanding of the scene and improving segmentation accuracy. For example, both RGB images and depth images can serve as input data: RGB images provide color and texture information, while depth images provide object geometry and distance information. Fusing these two modalities makes the semantic categories of objects in the image easier to distinguish and enables more accurate segmentation.
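The RGB-plus-depth fusion described above can be sketched in a few lines. This is a minimal illustration (not WiMi's implementation), assuming the simplest early-fusion strategy: normalizing each modality and stacking the depth map as an extra channel alongside the color channels.

```python
import numpy as np

def fuse_rgb_depth(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Concatenate an HxWx3 RGB image with an HxW depth map into an HxWx4 input.

    Both modalities are scaled to [0, 1] so neither dominates downstream layers.
    """
    rgb = rgb.astype(np.float32) / 255.0                      # color to [0, 1]
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)            # depth to [0, 1]
    return np.concatenate([rgb, d[..., None]], axis=-1)       # stack as 4th channel

# Toy inputs standing in for a real sensor pair.
rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
depth = np.random.rand(4, 4).astype(np.float32) * 10.0

fused = fuse_rgb_depth(rgb, depth)
print(fused.shape)  # (4, 4, 4)
```

A segmentation network would then consume the 4-channel tensor in place of a plain RGB input; the normalization constants here are illustrative assumptions.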
In addition, multi-modal data fusion improves the robustness of semantic segmentation. In real scenes, images may be affected by lighting changes, occlusion, noise, and other factors that degrade the accuracy of single-modal data. Fusing data from multiple modalities reduces the influence of these interfering factors, improving the stability of semantic segmentation and providing better support and solutions for related tasks in computer vision.
Multi-modal data fusion techniques are an important tool for improving semantic segmentation performance. Feature-level fusion, decision-level fusion, and other joint modeling methods can all be used. In practical applications, choosing an appropriate fusion method and tuning it to the specific task and data characteristics helps improve segmentation results and opens further possibilities for the development and application of semantic segmentation.
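The two fusion strategies named above can be contrasted with a toy sketch. This is an illustrative assumption, not WiMi's method: per-pixel features from two modality branches are either concatenated before a single classifier (feature-level) or classified separately and averaged (decision-level). The linear classifiers stand in for real network heads.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C, K = 8, 8, 16, 5                    # height, width, channels, classes
feat_a = rng.standard_normal((H, W, C))     # e.g. RGB-branch features
feat_b = rng.standard_normal((H, W, C))     # e.g. depth-branch features
W_fused = rng.standard_normal((2 * C, K))   # hypothetical joint classifier weights
W_a = rng.standard_normal((C, K))           # per-modality classifier weights
W_b = rng.standard_normal((C, K))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Feature-level (early) fusion: concatenate features, then classify once.
fused = np.concatenate([feat_a, feat_b], axis=-1)           # (H, W, 2C)
feature_level = softmax(fused @ W_fused)                    # (H, W, K) class scores

# Decision-level (late) fusion: classify each modality, then average the scores.
decision_level = 0.5 * (softmax(feat_a @ W_a) + softmax(feat_b @ W_b))

print(feature_level.shape, decision_level.shape)  # (8, 8, 5) (8, 8, 5)
```

Feature-level fusion lets the classifier learn cross-modal interactions, while decision-level fusion keeps the branches independent, which can be more robust when one modality fails.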
WiMi employs data pre-processing, feature extraction, data fusion, and segmentation model training to achieve semantic segmentation based on multi-modal data fusion. First, the data collected from the different sensors is pre-processed; this includes operations such as normalization, denoising, and enhancement to improve the quality and usability of the data. Next, features are extracted from each sensor's data: for image data, a convolutional neural network (CNN) can extract feature representations; for text data, a word embedding model can transform the text into vector representations. The features from the different sensors are then integrated, and finally the fused features are used to train the semantic segmentation model.
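The four-stage pipeline above can be traced end to end with toy stand-ins for each stage. Everything here is an illustrative assumption rather than WiMi's actual models: the "feature extractors" are simple linear projections standing in for a CNN, the labels are synthetic, and the segmentation model is a per-pixel softmax classifier trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(42)

def preprocess(x):
    """Stage 1: normalize to zero mean, unit variance (denoising/enhancement omitted)."""
    return (x - x.mean()) / (x.std() + 1e-8)

def extract_features(x, proj):
    """Stage 2: toy per-pixel extractor, a linear projection standing in for a CNN."""
    return x @ proj

# Two toy modalities as per-pixel measurement vectors.
H, W = 16, 16
rgb = rng.random((H, W, 3))
depth = rng.random((H, W, 1))

P_rgb = rng.standard_normal((3, 8))
P_depth = rng.standard_normal((1, 8))

f_rgb = extract_features(preprocess(rgb), P_rgb)
f_depth = extract_features(preprocess(depth), P_depth)

# Stage 3: fuse per-pixel features by concatenation.
fused = np.concatenate([f_rgb, f_depth], axis=-1)           # (H, W, 16)

# Stage 4: train a per-pixel softmax classifier on synthetic labels.
K = 3
labels = rng.integers(0, K, (H, W))
X = fused.reshape(-1, 16)
Y = np.eye(K)[labels.reshape(-1)]                           # one-hot targets
Wc = np.zeros((16, K))
for _ in range(200):
    logits = X @ Wc
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    Wc -= 0.1 * X.T @ (p - Y) / len(X)                      # gradient step

pred = (X @ Wc).argmax(axis=1).reshape(H, W)                # per-pixel class map
print(pred.shape)  # (16, 16)
```

In a real system, each stage would be replaced by the corresponding learned component (CNN backbones, a fusion module, a segmentation head), but the data flow is the same.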
Semantic segmentation based on multi-modal data fusion is of great significance in many fields, including computer vision, natural language processing, and intelligent interaction. However, challenges remain that require further research and exploration. The approach still has considerable room for development; by solving the problems of multi-modal data fusion and improving algorithmic efficiency and accuracy, the development and application of semantic segmentation can be further advanced.
In the future, WiMi will further explore more advanced multi-modal data fusion technologies, such as joint modeling of images and text, as well as more complex semantic segmentation models. WiMi also plans to apply semantic segmentation based on multi-modal data fusion to a wider range of fields, such as medical image analysis and intelligent transportation, in order to solve real-world problems and promote the development of science and technology.
About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.
Safe Harbor Statements
This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.
Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.