Those Things in 2021 | Breaking Down the Four Major Fields of Information Technology

In 2021, information technology advanced by leaps and bounds. Artificial intelligence, big data, open source, virtual reality (VR), augmented reality (AR)... development in almost every field was remarkable.

In the field of artificial intelligence, the basic capabilities of large language models, large image-text models, and even large multi-modal models have been fully demonstrated. For example, Alibaba DAMO Academy announced the latest progress of the multi-modal large model M6, with parameters jumping from one trillion to 10 trillion, and Pengcheng Laboratory and Baidu jointly released the world's first knowledge-enhanced 100-billion-scale large model, Pengcheng-Baidu Wenxin, with a parameter scale of 260 billion.

Not only that, the intersection of artificial intelligence with other scientific fields has also struck sparks. On the 2021 scientific breakthrough list recently announced by Science, AlphaFold and RoseTTAFold, two artificial intelligence technologies for predicting protein structure, ranked first.

In the field of human-computer interaction, while Mark Zuckerberg was renaming Facebook "Meta," Tesla and SpaceX CEO Elon Musk was focusing on brain-computer interfaces. Musk believes brain-computer interface devices are more likely to change the world and can help people with quadriplegia or other physical disabilities live and work better: "Sophisticated brain-computer interface devices can allow you to be completely immersed in virtual reality." In addition, in May this year, Stanford University developed an intracortical brain-computer interface system that can decode the imagined handwriting movements of paralyzed patients from neural activity in the motor cortex and convert them into text.

In the field of supercomputing, the most noteworthy event came in November, when a Chinese supercomputing application team won the Gordon Bell Prize, the highest international award in the field of high-performance computing applications, for its achievement in "ultra-large-scale quantum random circuit real-time simulation."

In terms of open source, the RISC-V open source instruction set and its ecosystem are rising rapidly; the openEuler operating system open source community, led by Huawei with participation from the Institute of Software of the Chinese Academy of Sciences, Kirin Software and others, has gathered 7,000 active developers, completed more than 8,000 self-maintained open source software packages, and spawned commercial releases from more than 10 manufacturers...

Looking back at 2021, the Information Technology Edition invited industry experts to sort out the development of these four fields and look ahead to future trends.

Author Zhang Shuanghu

AlphaFold may well have been the "big brother" of artificial intelligence (AI) in 2021.

Recently, "Science" magazine announced the 2021 scientific breakthrough list, with AlphaFold and RoseTTA-fold, two technologies for predicting protein structures based on artificial intelligence, at the top of the list.

A few days ago, AlphaGo and AlphaFold also appeared among the "Top Ten Global Engineering Achievements in 2021" selected by the Journal of the Chinese Academy of Engineering (major achievements in engineering science and technology of the past five years that have proven effective in global practice and have worldwide influence).

In an interview with China Science News, several experts talked about AlphaFold when looking back at this year's achievements in the field of artificial intelligence.

“AlphaFold, which is geared towards scientific discovery, and the artificial intelligence development ecosystem that China is building cannot be ignored,” Wu Fei, director of the Artificial Intelligence Institute of Zhejiang University, told China Science News.

Wang Jinqiao, a researcher at the State Key Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, nominated "using AI for COVID-19 diagnosis," "the integration of artificial intelligence with biology, pharmaceuticals, materials and other sciences (AI for Science)," and "the tri-modal large model Zidong Taichu" as the year's highlights.

In the medical field, AI recognition of cough sounds has long been used to detect diseases such as pneumonia, asthma, and Alzheimer's disease. Researchers at the Massachusetts Institute of Technology in the United States have developed an AI model that can identify COVID-19 patients by analyzing cough recordings: its accuracy in identifying the coughs of COVID-19 patients is 98.5%, and its accuracy in identifying asymptomatic infections is as high as 100%. It was recently reported that the model has been used to identify the Omicron variant.

“Zidong Taichu has for the first time realized the unified representation of images, text and sound, and has both cross-modal understanding and generation capabilities,” Wang Jinqiao said. “An 'all-media multi-modal large model research and development plan' has now been released jointly with Xinhua News Agency, realizing unified modeling of the understanding and generation of all-media data and creating a full-stack, domestically developed media artificial intelligence platform. It has been applied on an exploratory basis in scenarios such as quality inspection in the textile and automobile industries.”

On December 7, the official website of the Ministry of Science and Technology published three letters supporting Harbin, Shenyang and Zhengzhou in building national new-generation artificial intelligence innovation and development pilot zones. So far, China has 18 such pilot zones, which will lead the innovative development of artificial intelligence across the country.

“China is promoting the development of the artificial intelligence ecosystem and building a healthy ecology,” Wu Fei said. “Currently, there are 15 national new-generation artificial intelligence development and innovation platforms, 18 national new-generation artificial intelligence innovation and development pilot zones, and 8 artificial intelligence innovative application pilot zones, and universities have set up artificial intelligence undergraduate majors and interdisciplinary talent training programs.”

“The first is large models, and the second is the combination of artificial intelligence with basic disciplines,” Sun Maosong, a professor at Tsinghua University, told China Science News. “The basic capabilities of large language models, large image-text models and even large multi-modal models have been fully demonstrated, confirming their status as basic soft infrastructure for intelligent information processing. At the same time, this is not simply a matter of expanding scale; it challenges both digital resource integration capabilities and computing capabilities. Although their limitations are also obvious, some of their 'peculiar' properties (such as few-shot learning, deep double descent and prompt-based task adjustment) have led scholars to expect that extremely large parameter scales may bring qualitative changes, thus paving the way for new breakthroughs.”
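
As a concrete illustration of the "prompt-based task adjustment" and few-shot learning mentioned above, here is a minimal sketch, not tied to any model named in this article: a handful of labelled examples are packed into a text prompt and a pretrained language model is simply asked to continue it, with no fine-tuning. It assumes the Hugging Face transformers package; the small public gpt2 checkpoint is used only so the snippet runs anywhere, since the reliable few-shot behaviour the quote refers to emerges only at much larger parameter scales.

```python
# Sketch of prompt-based few-shot "task adjustment" (assumes transformers + torch).
# gpt2 is a small stand-in model; robust few-shot behaviour needs far larger models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The task is specified purely in the prompt: a few labelled examples
# followed by the query, with no gradient updates to the model at all.
prompt = (
    "Review: The food was wonderful. Sentiment: positive\n"
    "Review: Terrible service, never again. Sentiment: negative\n"
    "Review: Loved the atmosphere and the staff. Sentiment: positive\n"
    "Review: The soup was cold and bland. Sentiment:"
)

result = generator(prompt, max_new_tokens=2, do_sample=False)
print(result[0]["generated_text"][len(prompt):].strip())
```

Swapping the prompt swaps the task while the model weights stay fixed, which is why such properties are read as hints that sheer parameter scale may eventually bring qualitative change.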

This year, the field of artificial intelligence moved from the stage of "mass-producing models" to the stage of "refining large models," with parameter counts climbing from hundreds of billions to trillions. In the field of large models, it seems there is no biggest, only bigger.

In March, the Beijing Academy of Artificial Intelligence (Zhiyuan) released China's first ultra-large-scale artificial intelligence model, Wudao 1.0. In June, it broke its own record with Wudao 2.0, whose parameter scale reached 1.75 trillion. In September, the Inspur Artificial Intelligence Research Institute launched the massive Chinese language model Source 1.0, with 245.7 billion parameters. In November, Alibaba DAMO Academy announced the latest progress of the multi-modal large model M6, with parameters jumping from one trillion to 10 trillion. In December, Pengcheng Laboratory and Baidu jointly released the world's first knowledge-enhanced 100-billion-scale large model, Pengcheng-Baidu Wenxin, with a parameter scale of 260 billion.

Correspondingly, Kuaishou and ETH Zurich recently proposed a new recommendation system, Persia, which supports training models with up to 100 trillion parameters.

On the other hand, artificial intelligence is making deep inroads into basic scientific disciplines.

In July, the research results of DeepMind's artificial intelligence program AlphaFold2 were published in Nature; in structural biology, artificial intelligence may lead biology, medicine and pharmacy into new territory. In November, researchers at the University of Southern California used brain-computer interface devices to study neural activity data while monkeys played games and ran on treadmills. In December, a machine learning framework developed by DeepMind helped mathematicians discover two new conjectures in pure mathematics, demonstrating the potential of machine learning to support mathematical research.

“This year, artificial intelligence has also made great achievements in applications across industries,” Sun Maosong said. “The combination of artificial intelligence with basic sciences has shown great potential, and many top-level papers have been published, revealing a strong trend: 'artificial intelligence + basic science' holds great promise.”

Author Zhang Shuanghu

Brain-computer interfaces, AR glasses, smart voice, myoelectric wristbands, mid-air gesture recognition... In 2021, from basic research to application, the field of human-computer interaction surged forward. Whether in the booming areas of smart health, the metaverse, or autonomous driving, human-computer interaction seems to be on the threshold of industrialization.

“The high-throughput, ultra-flexible neural electrodes we developed have passed scientific research and clinical ethics review, and human clinical trials of the brain-computer interface will begin soon,” Tao Hu, deputy director of the Shanghai Institute of Microsystems of the Chinese Academy of Sciences and deputy director of the Joint State Key Laboratory of Sensing Technology, told China Science News. “We will achieve safe and stable large-scale collection of neuronal signals from the human brain and closed-loop regulation, restoring patients' sensory and motor functions.”

Brain-computer interface technology is bringing more and more convenience to patients. In May this year, researchers from Stanford University published a cover paper in Nature describing an intracortical brain-computer interface system that decodes a paralyzed patient's imagined handwriting movements from neural activity in the motor cortex and converts them into text. With the help of this system, the subject (paralyzed by a spinal cord injury) could type nearly a hundred characters per minute, and with automatic correction the offline accuracy exceeded 99%.
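
The Stanford system itself used a recurrent neural network decoder trained on the participant's intracortical recordings, with a language model providing the offline autocorrection; none of that is reproduced here. Purely to illustrate the general idea of mapping neural-activity features to characters, the toy sketch below, with entirely synthetic data and hypothetical names and sizes, trains a simple classifier on simulated firing-rate vectors, one cluster per character (it assumes numpy and scikit-learn):

```python
# Toy illustration only: decoding characters from synthetic "neural" features.
# The real system used an RNN decoder on intracortical recordings plus a
# language model for correction; this only shows the feature->character idea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
chars = list("abcde")            # pretend alphabet (hypothetical)
n_units = 50                     # hypothetical number of recorded neurons

# Each character gets its own mean firing-rate pattern across the units.
prototypes = {c: rng.normal(size=n_units) for c in chars}

def simulate_trials(n_per_char):
    X, y = [], []
    for c in chars:
        X.append(prototypes[c] + 0.5 * rng.normal(size=(n_per_char, n_units)))
        y.extend([c] * n_per_char)
    return np.vstack(X), np.array(y)

X_train, y_train = simulate_trials(200)   # "calibration" trials
X_test, y_test = simulate_trials(50)      # held-out trials

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("offline decoding accuracy:", decoder.score(X_test, y_test))
print("decoded text:", "".join(decoder.predict(X_test[:10])))
```

A real decoder must additionally handle the continuous, noisy time course of imagined pen strokes, which is what the recurrent architecture and language-model correction address.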

Not long ago, Musk said that he hopes to use Neuralink's microchip device in humans next year; the chip would be used to treat brain and neurological conditions such as spinal cord injuries and Parkinson's disease. The related technology is currently awaiting approval from the U.S. Food and Drug Administration.

“The field of brain-computer interfaces has accumulated considerable technology and is expected to become a powerful tool for tackling brain diseases,” Tao Hu said. “Everyone is seizing the opportunity for clinical application, and the technology may be deployed as soon as next year. Within two to three years, China is expected to have unicorn companies comparable to Musk's Neuralink.”

“Human-computer interaction will give rise to a new trillion-level market.” This judgment by Yan Qun, a distinguished professor at Fuzhou University, also encompasses the huge market of the metaverse.

Some people call 2021 the "first year of the metaverse," while others think it is just "old wine in a new bottle." Either way, the metaverse has become an unavoidable topic in the field of human-computer interaction this year.

"The Metaverse is a synthesis of virtual reality, augmented reality and mixed reality. It is actually not a new thing. . "Liu Wei, director of the Human-Computer Interaction and Cognitive Engineering Laboratory of Beijing University of Posts and Telecommunications, told China Science News, "The metaverse is the future development direction for the real world and the virtual world, but there are still some technical problems that have not been well solved.

In the real world, the hybrid problems of human-computer interaction and human-machine-environment systems have not been well solved. In real-world human-computer interaction, whether in input, processing or output, objective data, subjective information and knowledge still cannot be seamlessly integrated.

Liu Wei believes that in both the real and virtual worlds, human and machine decision-making each involve "fast" and "slow" processes, sometimes relying more on logic and sometimes more on intuition. This kind of "hybrid decision-making" is constantly shifting, and it is difficult to find the laws governing the shifts.

"The metaverse has not yet been able to solve the problem. Still in the early stages of painting the cake. " Liu Wei said, "Because its underlying mechanism has not been solved - humans have not been able to perfectly solve the problem of human-computer interaction in the real world, and they cannot solve it in the metaverse. ”

For human-computer interaction, Liu Wei believes the second issue that cannot be ignored is complexity.

“This year's Nobel Prize in Physics was likewise awarded for work on complex systems, including models that predict climate change,” Liu Wei said. “Human-computer interaction is also a complex system, which includes both simple repetitive problems and complex cross-domain collaboration problems.”

Liu Wei believes that, viewed from the perspective of intelligence, a complex system includes three important components: people, equipment (man-made objects), and the environment. This is in fact a "human-machine-environment system" problem in which many elements are intertwined and overlapping.

"In human-computer interaction, machines are good at handling complex problems, while humans are good at managing complex problems. 'Things - cross-domain collaboration, balance between things, etc. Because people have not found the simple operating rules of complex things, to solve all problems of intelligent products and intelligent systems, we must find their combination, integration and interaction points in the system of man, machine and environment. Moreover, people should be in a dominant position in this system. "

The third phenomenon in human-computer interaction that attracted Liu Wei's attention is that "artificial intelligence has helped mathematicians discover some laws." “Recently, DeepMind developed a machine learning framework that can help mathematicians discover new conjectures and theorems,” Liu Wei said. “Artificial intelligence is at bottom a mathematical tool, and at the same time mathematics reflects some basic laws. If artificial intelligence can help mathematicians deal with certain mathematical problems, people will better understand the simple laws of complex systems, and new breakthroughs in human-computer interaction may follow.”

Author Zhang Yunquan (researcher at the Institute of Computing Technology, Chinese Academy of Sciences)

This year was a bumper year for China's supercomputing applications.

In mid-November, at the Global Supercomputing Conference (SC21) in the United States, a Chinese supercomputing application team won the Gordon Bell Prize, the highest academic award in the field of high-performance computing applications, for its pioneering quantum circuit simulation ("ultra-large-scale quantum random circuit real-time simulation") based on the new Sunway system.

At the same time, in the SC21 student supercomputing competition finals, the Tsinghua University supercomputing team once again won the championship, its fourth consecutive SC title. These achievements in the scalability and performance tuning of large-scale application software show that China's parallel software development is in the ascendant.

Looking back at how supercomputing drives industry, we must highlight the term "computing power economy." As early as 2018, we proposed this concept, arguing that a computing power economy with supercomputing at its core will become a representative indicator for measuring the development of a region's digital economy and the conversion of old growth drivers into new ones.

Based on the trends of recent years, we believe the development of high-performance computing has fully demonstrated that, with the converging innovation of supercomputing, cloud computing, big data and AI, computing power has become the key to the development of the entire digital information society, and the computing power economy has stepped onto the stage of history.

A comprehensive analysis of the development of high-performance computers in China in 2021 shows that high-performance computing currently exhibits several characteristics.

First, high-performance computing and cloud computing have become deeply integrated. High-performance computing is usually built on MPI, efficient communication, heterogeneous computing and other technologies, and tends to run on dedicated, exclusively used resources, while cloud computing offers elastic deployment and fault tolerance, and supports virtualization, unified resource scheduling and elastic system configuration.
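
As a point of contrast, here is a minimal sketch of the tightly coupled, fixed-size style of MPI computing described above; it assumes the mpi4py and numpy packages and an MPI runtime such as Open MPI. All processes in the job start together and synchronize in collective operations, rather than being added or removed elastically as in a cloud service:

```python
# Minimal MPI sketch: a collective all-reduce across a fixed set of ranks.
# Assumes mpi4py and an MPI runtime; run with e.g.  mpirun -np 4 python demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's ID within the job
size = comm.Get_size()      # number of processes, fixed at launch time

# Each rank works on its own slice of the problem...
local = np.full(4, float(rank))

# ...then all ranks synchronize in one collective sum (everyone blocks here).
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks, element-wise sum of all local vectors: {total}")
```

The number of ranks is fixed when the job is launched and every rank must participate for the collective to complete, which is exactly the exclusive, non-elastic behaviour that high-performance cloud offerings try to reconcile with cloud-style scheduling.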

With the development of technology, supercomputing and container clouds are merging and innovating, and the "high-performance cloud" has become a new class of product and service. AWS, Alibaba Cloud, Tencent, Baidu and Beilong Super Cloud, a representative of commercial supercomputing, have all launched high-performance cloud services and products that combine supercomputing and cloud computing technology.

Second, supercomputing applications have expanded from the high-end, precision-oriented uses of the past toward broader and broader directions. With the development of supercomputers, and especially the continuing decline in the cost of using them, their application fields have expanded rapidly from areas of national strategic significance such as precision R&D, information security, petroleum exploration, aerospace and "cold" scientific computing to the wider main battlefields of the national economy, such as pharmaceuticals, gene sequencing, animation rendering, digital film, data mining, financial analysis and Internet services. It can be said that supercomputing has already penetrated deeply into all walks of life in the national economy.

Judging from China's high-performance computing TOP100 rankings (HPC TOP100) in recent years, supercomputing systems used to be concentrated mainly in scientific computing, government, energy, electricity, meteorology and other fields. In the past five years, supercomputing systems deployed by Internet companies have come to account for a large proportion, with the main applications being cloud computing, machine learning, artificial intelligence, big data analysis and short video. The sharp increase in computing demand in these fields shows that supercomputing is converging with Internet technology.

Looking at the Linpack performance share in the HPC TOP100 list, computing power services rank first with 46%; supercomputing centers account for 24%, ranking second; artificial intelligence, cloud computing and short video follow with 9%, 5% and 4% respectively.

It can be seen that the continued increase in the proportion of artificial intelligence is closely related to the rapid rise of algorithms and applications such as machine learning, as well as the widespread application of deep learning algorithms in big data. Internet companies have rediscovered the value of supercomputers, especially GPU-accelerated heterogeneous supercomputers, through deep learning algorithms, and have invested heavily in building new systems.

Taken together, computing power services, supercomputing centers, artificial intelligence, scientific computing and other fields are currently the main users of high-performance computing, and demand from the Internet, big data, and especially the AI field is growing strongly.

Third, the state has drawn up a strategic layout plan for computing power. In May this year, four departments including the National Development and Reform Commission jointly released the "Implementation Plan for the Computing Power Hubs of the National Integrated Big Data Center Collaborative Innovation System," proposing to build national hub nodes of the national computing power network in the Beijing-Tianjin-Hebei region, the Yangtze River Delta, the Guangdong-Hong Kong-Macao Greater Bay Area, Chengdu-Chongqing, Guizhou, Inner Mongolia, Gansu and Ningxia, and to launch the "Eastern Data, Western Computing" project, which channels data from the east to the west for storage and computation while building computing power nodes in the west. This improves the balanced layout of digital infrastructure, effectively optimizes the structure of data centers, upgrades computing power, and builds a national computing power network system.

Finally, artificial intelligence's demand for computing power has become the main driving force behind the development of computing power. Algorithmic innovations such as machine learning and deep learning; the big data gathered through the Internet of Things, sensors, smartphones, smart devices and Internet technologies; and the super computing power provided by supercomputers and cloud computing are recognized as the "troika" of the artificial intelligence era, and together they have launched the latest round of the artificial intelligence revolution.

In the context of the booming development of artificial intelligence, virtualized cloud computing has evolved into high-performance container cloud computing, and the integration and innovation of big data, parallel computing, and machine learning have become the latest direction of industry development.

In addition, in terms of intelligent computing evaluation, China has proposed a number of benchmark programs, including AIPerf 500, which powerfully complement the traditional Linpack benchmark.

These developments show that the penetration of supercomputing technology into the industry is accelerating. We have entered an era of artificial intelligence that relies on computing power, which is also one of the inevitable trends in future development. As users' demand for computing power continues to grow, the computing power economy will surely occupy an important position in future social development.

Author Wu Yanjun (researcher at the Institute of Software, Chinese Academy of Sciences)

The remarkable development of open source is not just a matter of this year. A lot of important things have happened in the open source field in recent years.

For example, there is the rapid rise of the RISC-V open source instruction set and its ecosystem. This is reminiscent of the birth of Linux in the early 1990s: UNIX and Windows were then mainstream, and few could have predicted that operating systems built on the Linux kernel would one day reach into every aspect of people's lives.

Today, more than 80% of the apps people use every day run on Android, an operating system with Linux at its core. Moreover, there is a high probability that the back-end servers supporting those apps also run a Linux distribution.

Therefore, today's RISC-V may likewise be underestimated, dismissed as immature and unable to compete with ARM and x86. But perhaps in the future RISC-V, like Linux, will eventually become a mainstream instruction set ecosystem worldwide, with products covering every field.

In 2020 alone, the number of members of RISC-V International (RVI, the new name of the RISC-V Foundation after moving to Switzerland) increased by 133%. In fact, RVI's move to Switzerland itself is of great significance. It is a classic case in which the open source field maintains its original intention and does not "choose sides" in the face of competition from major powers. It is worthy of reference by other open source foundations around the world.

In China, at the end of 2019, the openEuler operating system open source community was officially established, led by Huawei with participation from the Institute of Software of the Chinese Academy of Sciences, Kirin Software, and others. In just two years, the community has gathered 7,000 active developers, completed more than 8,000 self-maintained open source software packages, and spawned commercial releases from more than 10 manufacturers.

This is the first true "root community" in China's basic software field. Although there is still a gap compared with Debian and Fedora, which have histories of more than 20 years, it marks an important step: in academic research, technology R&D and industrial innovation, we finally have a new, domestically led platform on which long-term accumulation is possible.

At the same time, after Huawei's access to GMS (Google Mobile Services) for its Android devices was cut off overseas, the company launched the HarmonyOS operating system and placed the open source project OpenHarmony under the OpenAtom Open Source Foundation.

At present, OpenHarmony has attracted the participation of many domestic manufacturers in a short period of time, which also reflects the strong demand of the domestic industry for a new generation of Internet of Everything operating systems. Although it still lags behind Android in terms of ecological scale and technical completeness, it has taken the first step in building an independent ecosystem.

In April, the U.S. Supreme Court ruled in the long-running Google v. Oracle case that Google's use of the Java API declarations constituted fair use. This is equivalent to delineating a boundary for the fair use of source code: fair use stops at the interface, and once you reach the implementation code behind the interface you must comply with the relevant license. The ruling has important reference value for the legal definition of open source intellectual property.

In May this year, the "2021 China Open Source Development Blue Book" was released. It not only systematically surveys the current state of open source talent, projects, communities, organizations, education and business in China and offers development suggestions, but also provides a reference for government departments in formulating open source policies and laying out open source strategies, and supplies case studies and data for research institutes, technology enterprises and open source practitioners.

Whether it is open source software evolving into an open source software and hardware ecosystem centered on open instruction sets, open source acquiring clear legal boundaries, or leading domestic companies exploring open source to tackle "chokepoint" problems and achieving real results, many cases point in one direction: the open source trend is unstoppable. It springs from the human instinct to share knowledge and create collaboratively, and it is also an important model by which human civilization is passed on in the digital age.

Of course, it is undeniable that open source still faces many problems, such as the security of the open source software supply chain. Security here includes software quality and security vulnerabilities in the traditional sense; the risk that open source software cannot be continuously and effectively maintained (for example, OpenSSL had only two part-time maintainers when Heartbleed was discovered, and log4j had only three part-time maintainers when its vulnerability surfaced); and the risk that access may be restricted (for example, GitHub once restricted access for Iranian developers).

With the concentration of open source software on commercial platforms such as GitHub, this problem will become more prominent and even evolve into a major risk. Open source software, an intellectual asset that should belong to all mankind, may become a weapon for implementing "long-arm jurisdiction." In order to avoid this problem, public infrastructure such as open source code hosting platforms and open source software construction and release platforms need to be "decentralized." The world needs multiple open source software infrastructures to minimize the threat of political power to open source communities.

For China, as open source software has become an important pillar of scientific research, industry and other major infrastructure, open source software itself must also have infrastructure covering code hosting, compilation, building, testing, release, and operation and maintenance, to ensure the security and continuity of the open source software supply and thereby strengthen the confidence of all sectors in using open source software.

In the future, core technology innovation and leadership in open source contribution will become a new driving force for domestic enterprises, and may well push China's open source industry to another peak.