In today's turbulent market, the way data are used in production is one of the key aspects for maintaining or increasing a manufacturing company's ability to compete. Even though most companies are aware of the advantages of collecting, analyzing and using data, the majority of them do not exploit these fully. IT systems and sensors are integrated into the shop floor in order to deal with current challenges, but this often leads to an overwhelming amount of data without contributing to an improvement of production control. Because of developments like digitization and Industry 4.0, there is an innumerable amount of existing research focusing on data analytics, artificial intelligence and pattern recognition. However, research on collaborative platforms in traditional production control still needs improvement. Therefore, the main goal of this paper is to present a platform-based closed-loop production control and to discuss the relevant data. The collaborative platform represents the basis for a future analysis of high-resolution data using cognitive systems, enabling companies to maximize the automation of their production. A use case at the end of the paper shows a potential implementation of the findings in practice.
Industry 4.0 and the consequent necessity of digitalization also have implications for the field of procurement, resulting in the term Procurement 4.0. Digitalization can be a valuable tool to increase the efficiency of the procurement organization and to exploit new opportunities for growth. A mandatory requirement for performing the digital transformation is increased transparency along the procurement process chain. This paper aims to conceptualize a digital shadow for the procurement process in the manufacturing industry as a basis for advanced data analytics procedures. The term digital shadow stands for a sufficiently accurate digital image of a company's processes, information and data. This image is needed to create a real-time evaluable basis of all relevant data in order to finally derive recommendations for action. The formation of the digital shadow is thus a central field of action for Industrie 4.0 and forms the basis for all further activities.
Today, manufacturing companies are facing the influences of a dynamic environment and the continuously increasing planning complexity. Using advanced data analytics methods, processes can be improved by analyzing historical data, detecting patterns and deriving measures to counteract the issues. The basis of such approaches builds a virtual representation of a product – called the digital twin or digital shadow.
Although applied IT systems provide reliable feedback data on the processes on the shop floor, they lack a data structure that represents real-time data series of a product. This paper presents an approach for a data structure for order processing that overcomes the described issue and provides a virtual representation of a product. Based on this data structure, deviations between the production schedule and the real situation on the shop floor can be identified in real time, and measures to reschedule operations can be derived.
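As an illustration of such a data structure, a minimal sketch could pair each operation's planned times with the feedback times reported from the shop floor and flag delayed operations. The field names and the flat operation list are hypothetical simplifications, not the paper's actual model:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Operation:
    """One operation of an order; times in hours since schedule start."""
    name: str
    planned_start: float
    planned_end: float
    actual_start: Optional[float] = None  # filled from shop-floor feedback
    actual_end: Optional[float] = None

@dataclass
class OrderShadow:
    """Hypothetical virtual representation of one order on the shop floor."""
    order_id: str
    operations: List[Operation] = field(default_factory=list)

    def deviations(self, threshold: float = 0.0) -> List[Tuple[str, float]]:
        """Return (operation, delay) pairs where the reported actual end
        exceeds the planned end by more than `threshold` hours."""
        result = []
        for op in self.operations:
            if op.actual_end is not None:
                delay = op.actual_end - op.planned_end
                if delay > threshold:
                    result.append((op.name, delay))
        return result

shadow = OrderShadow("A-100", [
    Operation("milling", 0.0, 2.0, actual_start=0.0, actual_end=2.5),
    Operation("drilling", 2.0, 3.0),  # no feedback yet
])
print(shadow.deviations())  # [('milling', 0.5)]
```

A rescheduling component could consume this deviation list to shift the planned times of all downstream operations.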
Today's manufacturers are facing numerous challenges such as highly entangled and interconnected supply chains, shortening product lifecycles and growing product complexity. They thus feel the need to adjust and adapt faster on all levels of value creation. Self-optimization as a basic principle appears to be a promising approach for handling complexity and unforeseen disturbances within supply chains, machines and processes, and can thereby improve the resilience and competitiveness of manufacturing companies.
This paper gives an introduction to the concept of self-optimizing production systems. After a short historical review, the different levels of value creation from supply chain design and management to manufacturing and assembly are analyzed considering their specific demands and needs for self-optimization. Examples from each of these levels are used to illustrate the concept of self-optimization as well as to outline its potential for flexibility and productivity. This paper closes with an outlook on the current scientific work and promising new fields of action.
Real-time data analytics methods are key elements to overcome currently rigid planning and to improve manufacturing processes by analyzing historical data, detecting patterns and deriving measures to counteract the issues.
The key element to improve, assist and optimize the process flow is a virtual representation of a product on the shop floor, called the digital twin or digital shadow. Using the collected data requires high data quality; therefore, measures to verify the correctness of the data are needed. Based on the described issues, the paper presents a real-time reference architecture for order processing.
This reference architecture consists of different layers and integrates real-time data from different sources as well as measures to improve the data quality. Based on this reference architecture, deviations between plan data and feedback data can be measured in real time and countermeasures to reschedule operations can be applied.
Influenced by the high dynamics of the markets and the steadily increasing demand for short delivery times, the importance of supply chain optimization is growing. In particular, the order process plays a central role in achieving short delivery times, and companies constantly need to evaluate the trade-off between high inventory and the risk of stock-outs. However, analyzing different order strategies and the influence of various production parameters is difficult in industrial practice. Therefore, simulations of supply chains are used to improve processes in the whole value chain. The objective of this research is to evaluate two different order strategies, (t, q) and (t, S), in a four-stage supply chain. To measure the performance of the supply chain, the quantity of the backlog is considered. A Design of Experiments approach is applied to enhance the significance of the simulation results.
Influenced by the high dynamics of the markets, the optimization of supply chains is gaining importance. However, analyzing different procurement strategies and the influence of various production parameters is difficult in industrial practice. Therefore, simulations of supply chains are used to improve the production process. The objective of this research is to evaluate different procurement strategies in a four-stage supply chain. In addition, this research aims to identify the main factors influencing the supply chain's performance, which is measured by means of back orders (backlog). A scenario analysis of different customer demands and a Design of Experiments analysis enhance the significance of the simulation results.
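The (t, S) order-up-to policy evaluated in these studies can be sketched as a single-stage periodic-review simulation that accumulates backlog as the performance measure. The single stage, parameter values and uniform demand here are illustrative assumptions; the papers use a four-stage model with their own settings:

```python
import random

def simulate_t_S(periods, t, S, lead_time, demand, seed=0):
    """Periodic-review (t, S) policy: every t periods, raise the inventory
    position (on hand - backlog + on order) up to level S.
    Returns the total backlog accumulated over the horizon."""
    rng = random.Random(seed)
    on_hand = S
    backlog = 0
    pipeline = []           # outstanding orders as (arrival_period, qty)
    total_backlog = 0
    for period in range(periods):
        # receive replenishments arriving this period
        arrived = sum(q for (p, q) in pipeline if p == period)
        pipeline = [(p, q) for (p, q) in pipeline if p != period]
        on_hand += arrived
        # serve new demand plus previously backlogged demand
        d = demand(rng) + backlog
        served = min(on_hand, d)
        on_hand -= served
        backlog = d - served
        total_backlog += backlog
        # review: every t periods, order up to S
        if period % t == 0:
            position = on_hand - backlog + sum(q for (_, q) in pipeline)
            if position < S:
                pipeline.append((period + lead_time, S - position))
    return total_backlog

# Illustrative run: weekly review over one year, uniform demand 20-35.
total = simulate_t_S(52, t=4, S=120, lead_time=2,
                     demand=lambda rng: rng.randint(20, 35))
```

A Design of Experiments study would vary factors such as `t`, `S`, `lead_time` and the demand distribution and compare the resulting backlog across runs.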
Towards the Generation of Setup Matrices from Route Sheets and Feedback Data with Data Analytics
(2018)
The function or department of production control in manufacturing companies deals with short-term scheduling of orders and the management of deviations during order execution. Depending on the equipment and the characteristics of orders, sequence-dependent setup times may occur. In these cases, companies that focus on high utilization of their assets, for instance due to long ramp-up phases and high energy costs, benefit from choosing sequences with minimal setup times between orders. Identifying such sequences requires detailed and correct information on the specific setup times. With increasing product variety and smaller lot sizes, it becomes more difficult and time-intensive to determine these values manually. One approach is to analyze the relevant features of the orders described in the route sheets or recipes to find similarities in materials and required tools. This paper presents a methodology that supports setup-optimized sequencing for sequence-dependent setup times by constructing the setup matrix from such route sheets with the use of data analytics.
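One simple way to derive such a matrix from route-sheet features is sketched below, under the illustrative assumption that changeover effort scales with the number of features (tool, material, fixture, ...) that differ between consecutive orders; the paper's actual analytics are more elaborate:

```python
def setup_matrix(orders, minutes_per_change=10):
    """orders: dict order_id -> set of feature strings from the route sheet.
    Returns dict (from_id, to_id) -> estimated setup time in minutes,
    assuming a fixed changeover cost per differing feature value."""
    matrix = {}
    for a, feats_a in orders.items():
        for b, feats_b in orders.items():
            if a == b:
                continue
            # each changed feature contributes its old and new value
            # to the symmetric difference, i.e. 2 elements per change
            changes = len(feats_a.symmetric_difference(feats_b))
            matrix[(a, b)] = changes * minutes_per_change
    return matrix

# Hypothetical route-sheet features extracted per order:
orders = {
    "O1": {"tool:T5", "material:steel", "fixture:F2"},
    "O2": {"tool:T5", "material:alu", "fixture:F2"},
    "O3": {"tool:T9", "material:alu", "fixture:F1"},
}
m = setup_matrix(orders)
# O1 -> O2 differs only in material (2 symmetric-difference elements): 20 min
```

A sequencing step could then feed this matrix into a traveling-salesman-style search for the order sequence with minimal total setup time.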
Manufacturing companies are facing an increasingly turbulent market, defined by products growing in complexity and shrinking product life cycles. This leads to a boost in planning complexity accompanied by higher error sensitivity. In practice, IT systems and sensors integrated into the shop floor in the context of Industry 4.0 are used to deal with these challenges. However, while existing research provides solutions in the fields of pattern recognition and recommended actions, a combination of the two approaches is neglected. This leads to an overwhelming amount of data without contributing to an improvement of processes. To address this problem, this study presents a new platform-based concept to collect and analyze high-resolution data with the use of self-learning algorithms. Hereby, patterns can be identified and reproduced, allowing an exact prediction of the future system behavior. Artificial intelligence thus maximizes the degree of automation in reducing and compensating for disruptive factors.
With big data technologies on the rise, new fields of application appear in terms of analyzing data to find new relationships for improving process understanding and stability. Manufacturing companies often cope with a high number of deviations but struggle to resolve them efficiently. The research project BigPro aims to develop a methodology for implementing countermeasures to disturbances and deviations derived from big data. This paper proposes a methodology for practitioners to assess predefined countermeasures. It consists of a morphology with several criteria, each of which can take a certain characteristic. These are then combined with weighting factors to assess the feasibility of each countermeasure for prioritization.
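The weighted assessment could be sketched as a simple scoring model; the criteria names, characteristic scores and weights below are hypothetical placeholders, not the project's actual morphology:

```python
def assess(measures, weights):
    """measures: dict countermeasure name -> dict criterion -> score (e.g. 1-5).
    weights: dict criterion -> weighting factor.
    Returns (name, weighted score) pairs sorted best-first."""
    ranked = []
    for name, scores in measures.items():
        total = sum(weights[c] * s for c, s in scores.items())
        ranked.append((name, total))
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked

# Hypothetical morphology: higher score = more favorable characteristic.
weights = {"cost": 0.5, "implementation_time": 0.3, "effectiveness": 0.2}
measures = {
    "adjust_schedule": {"cost": 4, "implementation_time": 5, "effectiveness": 3},
    "add_shift":       {"cost": 2, "implementation_time": 3, "effectiveness": 5},
}
ranking = assess(measures, weights)  # best-scoring countermeasure first
```

The top-ranked countermeasure would then be prioritized for implementation against the detected deviation.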