Data application practice of Chinese enterprises

2022-08-22

More than two years have passed since Germany formally proposed the "Industry 4.0" strategy in April 2013. "Industry 4.0" has since attracted wide attention and discussion in China, and many leading enterprises have eagerly begun to formulate and implement their own Industry 4.0 strategies. These scenes in full swing highlight the attractive prospect of intelligent manufacturing of innovative products. Coincidentally, GE recently announced the sale of its non-industrial assets, including GE Capital, in order to focus on high-end manufacturing. All of this seems to confirm that a new industrial revolution, with innovation-driven and data-driven intelligent manufacturing at its core, is drawing near.

Industry 4.0 is inseparable from data analysis and application

One of the goals of Industry 4.0 is to build and upgrade from the smart factory to intelligent production. The former focuses on intelligent production systems and processes and on networked, distributed production facilities; the latter involves production logistics management, human-computer interaction, 3D printing, and the application of other advanced manufacturing technologies across the enterprise's production processes. A general consensus is that realizing this new industrial revolution will depend heavily on data mining and knowledge discovery capabilities built on "big data" technology. In general, the manufacturing industry stores far more data than other industrial sectors: according to incomplete statistics, new product data generated since 2010 has reached nearly 2 exabytes, including large volumes of instrument measurement data, supply chain data, and even product life-cycle data. Analyzing these data and mining their value is necessary work for providing intelligent guidance to innovation and production practice, and it directly affects how intelligent production can become. Almost every survey of the Industry 4.0 literature emphasizes the importance of data analysis and knowledge discovery.

Common problems in manufacturing data collection and analysis

Industry 4.0 means producing innovative products of high precision and excellent quality in many varieties and small batches. To achieve this, accurate and timely collection and analysis of production data is indispensable. In pursuit of product innovation and high-precision quality control, top companies such as Apple and Intel spend staggering amounts on data analysis software every year, and data analysis ability is an important dimension in their employee training and evaluation systems. Worldwide, industrial giants continue to increase their investment in data acquisition and analysis, and most domestic enterprises are likewise setting off an upsurge of data acquisition, analysis, and utilization. As a professional supplier of quality big data and intelligent manufacturing solutions, Quoins conducted in-depth research on dozens of enterprises in 2014 and found that many have begun to increase their investment in data collection and analysis. However, because they started late, some common and important problems remain in this field.

Unclear goals lead to overreach

Some experts have called for "big data" to start with "small applications": select one or two specific areas in which to apply data collection and analysis technology, then gradually expand to related areas as time and experience accumulate. Some enterprises, however, did not choose such small applications at the beginning, but instead spread out across as many areas as possible. As a result, every department was exhausted collecting data from the supply chain, production, quality, financial cost, ERP, after-sales service, and other sources, forming a huge mass of data without any clear idea of how to use it to solve specific problems. The outcome was merely a "data heap" of rather limited value. For example, an enterprise in Jiangsu spent nearly two years focusing on the data in five databases. Each team invested a great deal of energy and cost, but because the application goal was never clear, the project showed no benefits for a long time.

Roger, a senior expert at the Quoins quality big data research institute, said: since most domestic manufacturing enterprises have little experience in data analysis and application, they sometimes rush to launch projects to catch the "big data" fashion before their small applications are clearly defined, so the risk of failure is great. In fact, for the manufacturing industry, refinement and quality are the most important topics and will never change. Enterprises can start by improving production quality and process capability and reducing unit quality cost, collecting and integrating product quality data together with the process data that "produces that quality," and then analyzing and using them. This path is often more effective and more readily recognized by customers.

Manual data collection is inefficient

Quoins found in its survey that although many enterprises have a certain awareness of data accumulation, data collection is still semi-automated or even manual. In the measurement workshop of a precision parts manufacturer in Zhejiang Province, after completing a measurement the staff must record the results by pen on a pre-printed form, and another staff member then enters them into a computer. For some intelligent measuring instruments, such as CMMs, the data files the instruments output automatically are still stored as separate files scattered across the measuring computers; manual copying and conversion are required to normalize the data, and producing a simple quality inspection report takes more than half a day. This approach is not only inefficient, wasting considerable labor cost, but also highly error-prone. Tommy, the supervisor of the measurement workshop, knows this well: he once spent several hours looking for the cause of an abnormal measurement value, only to find that a decimal point had been misplaced in a manual record. "Generally speaking, with the help of automated data collection methods, enterprises can save at least 80% of the time spent on data collection and significantly reduce the probability of error," explained Tommy, a senior manager of Quoins' data acquisition program.
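As an illustration of the kind of automation described here, the sketch below merges scattered per-part measurement files into one normalized table. The file names and the one-`feature,value`-per-line layout are assumptions for illustration only; real CMM export formats vary by instrument vendor.

```python
import csv
import io

# Hypothetical inputs: each instrument exports one text file per part,
# with lines like "feature,measured_value". Values are invented.
RAW_FILES = {
    "part_001.txt": "concentricity,0.012\ndiameter,25.003\n",
    "part_002.txt": "concentricity,0.015\ndiameter,24.998\n",
}

def collect(raw_files):
    """Merge scattered per-part files into one normalized list of records."""
    rows = []
    for filename, text in raw_files.items():
        part_id = filename.rsplit(".", 1)[0]
        for feature, value in csv.reader(io.StringIO(text)):
            rows.append({"part": part_id,
                         "feature": feature,
                         "value": float(value)})  # fails loudly on bad entries
    return rows

table = collect(RAW_FILES)
```

Once measurements land in one uniform table like this, a quality inspection report becomes a query rather than half a day of copying, and the `float()` conversion catches the kind of decimal-point typo a manual transcription lets through.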

Only result data, no process data

To carry out quality monitoring or produce quality reports, many enterprises collect and sort the data on the quality indicators their customers care about directly, such as the concentricity, size, and angle of a specific product. From the perspective of quality big data, this approach is adequate if an enterprise only wants to produce quality reports or realize real-time quality monitoring (implementing SPC, for example, requires real-time data collection). However, the greater value of quality big data lies in finding room and clues for quality improvement in the data, so as to formulate a practical continuous improvement plan and even predict future quality. This is similar to Six Sigma quality management, which advocates "speaking with data" to reduce process variation. Without data on the spray angle, pressure, and pH value of the chemical solution in a PCB production process, it is difficult to determine which factor is the main cause of line-width defects in a given batch.
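The SPC monitoring mentioned above can be made concrete with a minimal individuals (I-MR) control chart, the standard textbook calculation that real-time result data enables. The measurements below are invented for illustration.

```python
from statistics import mean

def imr_limits(x):
    """Center line and 3-sigma limits for an individuals control chart."""
    mr = [abs(a - b) for a, b in zip(x[1:], x)]  # moving ranges of size 2
    mr_bar = mean(mr)
    cl = mean(x)
    # 2.66 = 3 / d2, with d2 = 1.128 for subgroups of size 2
    return cl, cl + 2.66 * mr_bar, cl - 2.66 * mr_bar

# Illustrative diameter measurements from consecutive parts
data = [25.01, 25.03, 24.98, 25.00, 25.02, 24.97, 25.04]
cl, ucl, lcl = imr_limits(data)
out_of_control = [v for v in data if not lcl <= v <= ucl]
```

This is exactly the point about result versus process data: the chart can flag *that* a batch drifted out of control, but only collected process variables (spray angle, pressure, pH) can explain *why*.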

data normalization is not application-oriented

Before data can be explored and analyzed routinely, it must be normalized, and data normalization is different from simple data acquisition and integration. For example, we may obtain the required data, including processing temperature data, from different databases and still be unable to analyze it. The reason is that in different problem settings we may sometimes need to treat temperature as a continuous variable and sometimes as a discrete one. The choice depends on the characteristics of the industry and process and on the problem to be analyzed and solved, but within the same industry or the same enterprise this treatment is relatively fixed. It is therefore better to perform such normalization automatically during data acquisition, so that the data obtained can be analyzed directly.
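A minimal sketch of this application-oriented normalization for the temperature example: the same readings are kept continuous for one kind of analysis and discretized into process bands for another. The band edges and readings are hypothetical.

```python
def to_band(temp_c, edges=(150.0, 180.0)):
    """Map a continuous temperature reading to a discrete process band."""
    low, high = edges  # hypothetical band edges, fixed per process
    if temp_c < low:
        return "low"
    if temp_c <= high:
        return "normal"
    return "high"

readings = [148.2, 165.0, 181.5, 172.3]
continuous = readings                      # used as-is, e.g. for regression
discrete = [to_band(t) for t in readings]  # e.g. for a contingency analysis
```

Because the band edges are fixed for a given industry and process, this mapping can be applied automatically at acquisition time, delivering analysis-ready data in both forms.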

At a loss when analyzing data

Fortunately, some enterprises can already carry out analysis work around quality control, improvement, and prediction, especially in high-tech sectors such as semiconductors. But when engineers do this work, beyond the normalization problem mentioned above, a further puzzle often arises: for the same data analysis problem there is usually more than one applicable method, ranging from elementary to advanced, and engineers often need a long time to work out an effective set of analysis methods and ideas for a specific problem.

This is because applying industrial statistical methods is quite specialized, requiring not only basic statistical knowledge but also considerable engineering background and data analysis experience. On the other hand, commonly used data analysis software generally just provides a range of analysis methods and is very flexible to use, but it gives little guidance on which method to adopt for a specific problem, leaving the great majority of engineers without a solid industrial statistics background at a loss. It is as if a room had many doors and we could not tell which one best leads where we want to go. Objectively, the flexibility of these tools is commendable in itself, but there is still a gap between it and the actual level of data analysis in most domestic enterprises. "The industrial solutions we focus on are designed precisely to help domestic enterprises shorten this learning time," Roger said.
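As a tiny illustration of the "many doors" problem, the sketch below summarizes the same sample two ways; even these two elementary methods already paint different pictures when an outlier is present, and choosing between them requires exactly the statistical and engineering judgment discussed above. The data are invented.

```python
from statistics import mean, median

# Illustrative sample; the last value is a gross outlier
sample = [10.1, 10.0, 10.2, 9.9, 10.1, 14.8]

central_mean = mean(sample)      # pulled upward by the outlier
central_median = median(sample)  # robust to the outlier
```

Neither door is wrong in general: the mean matters when every unit's deviation costs money, while the median better describes the typical part. Generic software offers both but does not say which fits the problem at hand.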

Admittedly, the problems above require long-term improvement on many fronts: knowledge accumulation, system construction, talent training, and more. Against the background of the "new normal" of China's economy and the bright vision of Industry 4.0, we have all the more reason to take solid steps. After all, "the world is so big," but the opportunities we can seize are few.
