Machine learning (ML) methods are difficult for manufacturing companies to employ productively. Data science is not their core skill, and acquiring talent is expensive. Automated machine learning (Auto-ML) aims to alleviate this by democratizing machine learning, introducing elements such as low-code or no-code functionalities into the model creation process. Because the Auto-ML vendor market is highly dynamic, it is difficult for manufacturing companies to implement this technology successfully: the variety of solutions, together with constantly changing requirements and functional scopes, complicates a sound software selection. This paper addresses that challenge by providing a longlist of requirements that companies should consider when selecting a solution for their use case. The paper is part of a larger research effort in which a structured selection process for Auto-ML solutions in manufacturing companies is designed. The longlist itself is the result of six case studies of different manufacturing companies, following Eisenhardt's method of case study research. A total of 75 distinct requirements were identified, spanning the entire machine learning and modeling pipeline.
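Such a requirements longlist can feed a simple shortlisting step. The following is a minimal sketch of weighted requirement scoring; the requirement names, weights, and fulfilment values are illustrative assumptions, not taken from the paper's 75 identified requirements.

```python
# Hedged sketch: weighted scoring of Auto-ML solutions against a
# requirements longlist. Requirements and weights below are invented
# for illustration only.

def score_solution(fulfilment: dict, weights: dict) -> float:
    """Weighted degree of fulfilment, normalised to [0, 1]."""
    total = sum(weights.values())
    return sum(weights[r] * fulfilment.get(r, 0.0) for r in weights) / total

# Hypothetical requirements with importance weights for one use case:
weights = {"no-code model creation": 3, "on-premises deployment": 2,
           "time-series support": 2, "model explainability": 1}

# Assessed fulfilment of one candidate solution (0 = none, 1 = full):
solution_a = {"no-code model creation": 1.0, "on-premises deployment": 0.5,
              "time-series support": 1.0, "model explainability": 0.0}

print(round(score_solution(solution_a, weights), 2))  # → 0.75
```

Ranking candidates by this score yields a shortlist; the actual selection process designed in the larger research effort is, of course, more elaborate.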
Artificial intelligence (AI) has reached market maturity as a technology in recent years. A variety of user-friendly products and services exist that simplify the application of AI in everyday life and in companies. The challenge users face, especially in a business context, is not the technical feasibility of an AI application but its organizationally and legally permissible design. In addition to increasing dynamism in legislation, there is growing societal interest in control over, and transparency of, the data collected for AI models. The discussion of data sovereignty in everyday business and private life is moving ever further into the center of public attention.
Data-based AI applications thus operate in a field of tension between the potential offered by collecting and sharing data across company boundaries and the challenge of preserving the data sovereignty of the persons involved. This study aims, first, to clarify the effects of data sovereignty, and of the current and upcoming regulations associated with it, on AI use cases. To this end, experts from the fields of law, AI research, and organizational research were interviewed. Second, the study highlights potentials and best practices of AI use cases involving inter-company data exchange. For this purpose, case studies were conducted in companies that have already successfully integrated data exchange into their business models in order to operate and improve their AI applications.
Manufacturing companies (MFRs) are increasingly extending their portfolios with services and data-driven services (DDS) to differentiate themselves from competitors, tap new revenue potential, and gain competitive advantages through digitization and the data it generates. Nonetheless, DDS fail more often than traditional industrial services and products within their first year on the market. In particular, companies fail to sell DDS successfully and efficiently through their existing (multi-level) distribution structures. Surprisingly, there is a lack of scientific research addressing this issue. Since there are currently no holistic models for an end-to-end description of distribution tasks for DDS in the manufacturing industry, this paper contributes a task-oriented reference model for mapping interactions in multi-level distribution management. A case study research approach is used to identify and describe the interactions in the multi-level distribution management of DDS and to develop a regulatory framework for MFRs and their multi-level distribution management. The research draws on the established theoretical framework of Service-Dominant Logic to address co-creation in the multi-level distribution management of DDS. As a result, this paper identifies different interaction variants as well as the need for a new management function comprising 4 main and 14 basic tasks.
Companies are transforming from transactional sales to providing solutions for their customers. Smart products, which enable companies to enhance their offerings with smart services, are usually a key building block in this transformation. However, developing a smart product requires digital skills and knowledge that many companies lack. To facilitate the design and conceptualization of smart products, this paper presents a use-case-based information systems architecture prototype for smart products. Furthermore, the paper covers the application and evaluation of the architecture in two different smart product projects. Using such an architecture as a reference in smart product development is a major advantage and accelerator for inexperienced companies, allowing faster entry into this new field of business. [https://link.springer.com/chapter/10.1007/978-3-031-14844-6_16]
Generation of a Data Model for Quotation Costing of Make-to-Order Manufacturers from Case Studies
(2022)
For contract or make-to-order manufacturers, quotation costing is a complex process that is mainly performed based on experience. Due to the high diversity of the product range of these mostly small and medium-sized enterprises (SMEs) and the poor data situation at the time of quotation preparation, the quality of the calculation is subject to strong variation and uncertainty. The gap between the initial quotation costing and the costs actually incurred (pre- and post-calculation) can be existentially critical for SMEs. Digitalization in general can help companies better understand their processes and generate data. Improving these processes requires an understanding of which data matter for the specific process. Accurate quotation costing for customized products is time-consuming and resource-intensive, as there is no overview of the data to be used within the process. This paper therefore derives a data model for supporting quotation costing, based on literature-based costing procedures and recorded case studies of quotation and calculation. Based on the results, SMEs gain a first overview of the data needed for quotation costing in order to optimize their calculation process.
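A data model for quotation costing can be sketched in code. The following is a minimal, illustrative sketch; the entities, attributes, and the assumed margin are hypothetical and do not reproduce the data model derived in the paper.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a minimal quotation-costing data model.
# Entities and attributes are assumptions for demonstration only.

@dataclass
class CostItem:
    description: str
    quantity: float
    unit_cost: float

    @property
    def total(self) -> float:
        return self.quantity * self.unit_cost

@dataclass
class Quotation:
    customer: str
    items: list = field(default_factory=list)
    margin: float = 0.15  # assumed target profit margin

    def quoted_price(self) -> float:
        cost = sum(i.total for i in self.items)
        return cost * (1 + self.margin)

q = Quotation("ACME", [CostItem("machining", 4, 80.0),
                       CostItem("material", 12, 5.5)])
print(round(q.quoted_price(), 2))  # → 443.9
```

A real model would additionally capture the data sources available at quotation time (e.g. historical post-calculations), which is precisely the overview the paper aims to provide.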
Due to shorter product life cycles and the increasing internationalization of competition, companies are confronted with increasing complexity in supply chain management. Event-based systems are used to reduce this complexity and to support employees' decisions. Such event-based systems include tracking & tracing systems on the one hand and supply chain event management on the other. Tracking & tracing systems only provide monitoring and deviation-reporting functions, whereas supply chain event management systems additionally offer simulation, control, and measurement functions. The central element connecting these systems is the event. It forms the information basis for mapping and matching the process sequences in the event-based systems. The events received from the supply chain partner form the basis for all downstream steps and must, therefore, contain correct data. Since data quality is insufficient in numerous use cases and incorrect data in supply chain event management is not considered in the literature, this paper deals with the description and typification of incorrect event data. Based on a systematic literature review, typical sources of errors in the acquisition and transmission of event data are discussed. The results are then applied to event data so that a typification of incorrect event types is possible. The results help to significantly improve event-based systems for use in practice by preventing incorrect reactions through the detection of incorrect event data.
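The detection of incorrect event data described above can be illustrated with simple rule-based checks. This is a hedged sketch under assumed field names and error labels; it is not the typification developed in the paper.

```python
from datetime import datetime, timezone

# Hedged sketch: rule-based checks for typical event-data errors
# (missing fields, implausible timestamps, out-of-order sequences).
# Field names and error labels are illustrative assumptions.

REQUIRED = ("event_id", "object_id", "timestamp", "location")

def classify_errors(event: dict, previous_ts=None) -> list:
    """Return a list of error labels for a single event record."""
    errors = []
    for f in REQUIRED:
        if not event.get(f):
            errors.append(f"missing:{f}")
    ts = event.get("timestamp")
    if ts is not None:
        if ts > datetime.now(timezone.utc):
            errors.append("implausible:future_timestamp")
        if previous_ts is not None and ts < previous_ts:
            errors.append("sequence:out_of_order")
    return errors

# An event missing its location and arriving out of sequence:
bad = {"event_id": "e2", "object_id": "pallet-7",
       "timestamp": datetime(2021, 1, 1, tzinfo=timezone.utc)}
errors = classify_errors(bad,
                         previous_ts=datetime(2022, 1, 1, tzinfo=timezone.utc))
print(errors)  # → ['missing:location', 'sequence:out_of_order']
```

Flagging such events before they reach downstream simulation and control functions is what prevents the incorrect reactions the paper addresses.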
Companies operate in an increasingly volatile environment in which developments such as shorter product lifecycles, the demand for customized products, and globalization increase the complexity and interconnectivity of supply chains. Recent events like Brexit, the COVID-19 pandemic, or the blockade of the Suez Canal have caused major disruptions in supply chains, demonstrating that many companies are insufficiently prepared for disruptions. As disruptions in supply chains are expected to occur even more frequently in the future, the need for sufficient preparation increases. Increasing resilience provides one way of dealing with disruptions. Resilience can be understood as the ability of a system to cope with disruptions and to ensure the competitiveness of a company. In particular, it enables preparation for unexpected disruptions. The level of resilience is thereby significantly influenced by actions initiated prior to a disruption. Although companies recognize the need to increase their resilience, it is not systematically implemented. One major challenge is the multidimensionality and complexity of the resilience construct. Designing resilience systematically requires an understanding of its components, yet a common understanding of the constituent parts of resilience is currently lacking. This paper therefore proposes a general framework for structuring resilience by decomposing the multidimensional concept into its individual components. The framework contributes to an understanding of the interrelationships between the individual components and identifies resilience principles as target directions for the design of resilience. It thus sets the basis for a qualitative assessment of resilience and enables the analysis of resilience-building measures in terms of their impact on resilience. Moreover, an approach for applying the framework to different contexts is presented and then used to detail the framework for the context of procurement.
The aim of this contribution is to show how manufacturing companies can systematically collect customer-related data along the customer journey. After an introduction motivating the topic, a clarification of terms, and a presentation of the study design, a reference process model of the customer interactions of manufacturing companies is designed; building on this, a data model of the digital shadow of customer interactions is derived; and finally, a procedure model for implementing the digital shadow of customer interactions is presented.
Robotic Process Automation (RPA) is gaining importance because it makes it possible to automate repetitive administrative processes and unlock efficiency potential. In practice, however, many implementation projects fail. This results primarily from a lack of understanding of how the introduction of RPA affects the organization as a whole system. A growing gap is emerging between RPA's promised performance and companies' ability to realize it. Despite the exponential pace of technological progress, many companies lack the adaptability that is essential for the sustainable success of an RPA implementation. In this context, the joint optimization of the aligned dimensions of people, technology, and organization plays a central role. A systematic literature review shows that existing approaches consider this relationship only insufficiently. In the current research landscape, no model exists that lays out the technical, social, and organizational components to be considered in the course of an RPA introduction. Guided by sociotechnical systems thinking and the case study research process, dimensions and elements of an RPA-specific sociotechnical system architecture are identified and explained in a theory-driven manner. The resulting model for supporting companies in introducing RPA was validated with numerous industry representatives within the public research project RPAsset of FIR e. V. an der RWTH Aachen.
In short-term production management within the Internet of Production (IoP), the vision of a Production Control Center is pursued, in which interlinked decision-support applications contribute to increasing decision-making quality and speed. The applications developed focus in particular on use cases near the shop floor, with an emphasis on the key topics of production planning and control, production system configuration, and quality control loops.
Within the Predictive Quality application, predictive models are used to derive insights from production data and subsequently improve the process- and product-related quality as well as enable automated Root Cause Analysis. The Parameter Prediction application uses invertible neural networks to predict process parameters that can be used to produce components with desired quality properties. The application Production Scheduling investigates the feasibility of applying reinforcement learning to common scheduling tasks in production and compares the performance of trained reinforcement learning agents to traditional methods. In the two applications Deviation Detection and Process Analyzer, the potentials of process mining in the context of production management are investigated. While the Deviation Detection application is designed to identify and mitigate performance and compliance deviations in production systems, the Process Analyzer concept enables the semi-automated detection of weaknesses in business and production processes utilizing event logs.
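The traditional methods that trained scheduling agents are compared against include simple dispatching rules. The following is a minimal sketch of one such baseline, the shortest-processing-time (SPT) rule, with illustrative job data; it is not the benchmark used in the Production Scheduling application.

```python
# Hedged sketch: shortest-processing-time (SPT) dispatching, a classic
# baseline for production scheduling. Job data are invented examples.

def spt_schedule(jobs: dict) -> list:
    """Sequence jobs by ascending processing time (SPT rule)."""
    return sorted(jobs, key=jobs.get)

def total_flow_time(jobs: dict, order: list) -> float:
    """Sum of completion times on a single machine for a given order."""
    t, total = 0.0, 0.0
    for j in order:
        t += jobs[j]
        total += t
    return total

jobs = {"J1": 5, "J2": 2, "J3": 8, "J4": 3}  # processing times
order = spt_schedule(jobs)
print(order, total_flow_time(jobs, order))  # → ['J2', 'J4', 'J1', 'J3'] 35.0
```

SPT is provably optimal for total flow time on a single machine, which is exactly why it serves as a meaningful yardstick when assessing whether a reinforcement learning agent has learned a competitive policy.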
With regard to the overall vision of the IoP, the developed applications contribute significantly to the intended interdisciplinarity of production and information technology. For example, application-specific digital shadows are drafted based on the ongoing research work, and the applications are prototypically embedded in the IoP.