Modern enterprises increasingly rely on data assets for their operations and for the provisioning of information-intensive services.
On the one hand, digitalization has deepened the reliance on these assets, both through digitization (i.e. the creation of digital twins of physical objects) and through the formulation of new value propositions.
On the other hand, data has generally become more available, and big data and AI technologies enable further exploitation, often creating an increasing dependency on data assets.
While data processing is likely to occur within a single organization, a single organization rarely has the capabilities for collecting all the potentially useful data on its own. And even when it does, the problems of authorizing the data collection, as well as its subsequent utilization, remain open. These problems are very contemporary, as they are centered on the concern of privacy and, more generally, on the recognition of the rights of data creators and data owners. However, rights recognition is only the tip of the iceberg. While the legal implications of unlawful data use are evident – regardless of the feasibility of proving misuse – the creation of data-based competitive advantages has raised competitiveness concerns, reinforced by the recent introduction of the EC data strategy document .
Consequently, for the organization’s current and future operations, and more generally for the creation and sustainment of an information-oriented competitive advantage, a key enabler is the ability to ensure that data is collected, processed, stored, and more generally used according to an agreed data policy. As also identified in the EC’s data strategy, the key underlying issue is trust, which is founded on the certainty that each party understands and acts according to the agreed directives. In technical terms, trust can be translated into agreeing on an unambiguous data policy and ensuring the compliance of the enterprise architecture with that policy.
As a direct consequence, data itself (through its associated data policies) becomes a driver for EA governance: it guides EA evolution, ensures compliance with legal and ethical obligations, and more generally contributes to the alignment of data needs with business needs by raising trust among the key stakeholders who can fulfill those data needs.
A Model-based Data Policy Methodology for EA governance
The Data Policy Methodology was introduced to provide a model-based means of specifying data policy requirements and of integrating these requirements into model-based EA activities . The methodology consists of:
- Data policy concept identification
- Data policy digitalization
- Definition and integration of a new viewpoint (data policy) with existing EA frameworks (detailed in )
- Definition of supporting processes for the architecture governance, specifically on architecture compliance and evolution
Data Policy – Concept Identification
A data policy can be defined as “an agreed set of rules that regulates the production, use and dissemination of data” . As such, a data policy must accurately define, for example, the actors involved in the respective scenarios, the data types, the purpose, the modalities with which the purpose can be achieved, and the data provisioning modalities. Using de Bono’s Six Thinking Hats, the concept of data policy can be accurately defined by identifying the set of questions that a data policy must answer. These questions can be obtained by associating the possible concerns that arise while wearing each hat, as listed in the figure below.
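As an illustration, the policy elements named above (actors, data types, purpose, use modalities, provisioning modalities) can be captured in a simple data structure whose empty fields expose the questions a draft policy still leaves open. This is a minimal sketch, not part of the methodology itself; the class and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataPolicy:
    # Illustrative fields derived from the policy definition above;
    # the real concept model covers a richer set of questions.
    name: str
    actors: list[str]            # who produces, uses, disseminates the data
    data_types: list[str]        # what data is covered
    purpose: str                 # why the data may be used
    use_modalities: list[str]    # how the purpose may be achieved
    provisioning: list[str]      # how the data is provided (e.g. API, bulk)

    def open_questions(self) -> list[str]:
        """Return the policy aspects still left unanswered."""
        fields = {
            "actors": self.actors,
            "data_types": self.data_types,
            "purpose": self.purpose,
            "use_modalities": self.use_modalities,
            "provisioning": self.provisioning,
        }
        return [name for name, value in fields.items() if not value]

policy = DataPolicy(
    name="telemetry-sharing",
    actors=["Agency A (producer)", "Partner B (consumer)"],
    data_types=["satellite telemetry"],
    purpose="collision avoidance analysis",
    use_modalities=["aggregate analysis only"],
    provisioning=[],  # not yet agreed between the parties
)
print(policy.open_questions())  # -> ['provisioning']
```

A policy draft with no empty fields would return an empty list, signalling that all the concept-level questions have an answer.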
Data policy digitalization
Once the concept of data policy had been identified, the next step was to digitalize it, i.e. to provide a digital representation that can also support EA governance. In particular, two models were defined:
- One conceptual model: to validate the concept definition and to ensure the consistency (internal and external) of data policies. Specifically, this model was built by identifying the possible ranges of answers that could be provided for the above questions. The definition of the conceptual model was an iterative activity that considered input from various domains.
- One logical model, derived from the conceptual one: to provide a digital form for representing data policies so that they can be more easily integrated with EA frameworks.
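The logical model itself is detailed in the referenced work; as a hedged sketch of the idea, the answers to the concept-level questions could be coded as a plain machine-readable record (JSON here) so they can be attached to EA model elements. The schema and field names below are illustrative assumptions, not the published model.

```python
import json

def to_logical_model(answers):
    """Serialize validated policy answers into a JSON record.

    `answers` maps question identifiers (illustrative names) to the
    answers elicited for a given data policy.
    """
    # A subset of questions assumed mandatory for this sketch.
    required = {"actors", "data_types", "purpose", "use_modalities"}
    missing = required - set(answers)
    if missing:
        raise ValueError(f"unanswered policy questions: {sorted(missing)}")
    return json.dumps({"policy": answers}, indent=2, sort_keys=True)

record = to_logical_model({
    "actors": ["Agency A", "Partner B"],
    "data_types": ["telemetry"],
    "purpose": "collision avoidance",
    "use_modalities": ["aggregate only"],
})
print(record)
```

Serializing the answers this way keeps the policy both human-reviewable and importable into EA tooling as element properties.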
Supporting Processes for Architecture Governance
In the initial release, the processes focused on the key governance concerns identified above:
- Architecture compliance
- Architecture evolution
These processes are defined in standard BPMN and can be applied with minimal prerequisite knowledge, specifically: the data policy concepts identified above, BPMN notation, and the concepts commonly defined in EA modelling languages (e.g. ArchiMate) or EA frameworks (e.g. DoDAF, MoDAF, ESA-AF , etc.).
In particular, for architecture compliance, the process guides the enterprise architect through the exploration of the EA model to confirm that the above data policy questions are “answered” by the EA as specified in the relevant data policies. The availability of this process is also an important element for organizational quality certification, as it provides internal auditors with the means to ensure that EA quality requirements on data use are satisfied.
In contrast, the architecture evolution process guides the enterprise architect in adapting the EA so that it remains compliant with the data policies. Architecture evolution is per se broader than this, and therefore the process needs to be complemented by other standard processes (e.g. those defined by COBIT) for the overall architectural evolution driven by evolving business goals and data needs.
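To make the compliance idea concrete, the following sketch checks whether the EA elements that handle a policed data type declare only uses permitted by the policy. The element and policy structures are hypothetical simplifications of what the BPMN compliance process would have the architect inspect, not an implementation of it.

```python
def check_compliance(ea_elements, policy):
    """Return (element, use) pairs whose declared data use is not permitted.

    `ea_elements`: list of dicts describing EA elements (names assumed).
    `policy`: dict with the data types covered and the permitted uses.
    """
    violations = []
    for element in ea_elements:
        for data_type in element["handles"]:
            # Only elements handling a policed data type are checked.
            if data_type in policy["data_types"] and \
               element["use"] not in policy["use_modalities"]:
                violations.append((element["name"], element["use"]))
    return violations

policy = {
    "data_types": ["telemetry"],
    "use_modalities": ["aggregate analysis"],
}
elements = [
    {"name": "Analytics Service", "handles": ["telemetry"],
     "use": "aggregate analysis"},
    {"name": "Export Job", "handles": ["telemetry"],
     "use": "raw dissemination"},
]
print(check_compliance(elements, policy))
# -> [('Export Job', 'raw dissemination')]
```

The output lists the elements an auditor would flag; an empty list corresponds to a compliant architecture with respect to this policy.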
How to Use the Data Policy Methodology for Architecture Governance
The methodology can be applied along the same lines as its definition, specifically:
- Eliciting (or identifying) the data policy for all the data items received by the enterprise
- Answering the questions identified in Figure 1
- Coding the answers in the logical model (data policy model)
- Building the EA data policy viewpoint and its views by visually organizing the data policy model
- Establishing traceability between the EA elements in the data policy viewpoint and in other viewpoints (e.g. strategic, operational, etc.)
- Implementing the architecture compliance and evolution processes (as needed).
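The traceability step above can be pictured as a simple lookup table linking each data policy to the EA elements it constrains in other viewpoints; when a policy changes, the table yields the elements to revisit during architecture evolution. All viewpoint and element names below are illustrative assumptions.

```python
# Hypothetical traceability links from the data policy viewpoint to
# elements in other viewpoints (viewpoint name, element name).
traceability = {
    "policy:telemetry-sharing": [
        ("operational", "Collision Avoidance Process"),
        ("systems", "Analytics Service"),
        ("strategic", "SSA Capability"),
    ],
}

def impacted_elements(policy_id, viewpoint=None):
    """List EA elements to revisit when the given policy changes.

    Optionally restrict the result to a single viewpoint.
    """
    links = traceability.get(policy_id, [])
    if viewpoint is not None:
        links = [link for link in links if link[0] == viewpoint]
    return [name for _, name in links]

print(impacted_elements("policy:telemetry-sharing", "systems"))
# -> ['Analytics Service']
```

Such a table is a flattened stand-in for the cross-viewpoint traceability relations an EA tool would maintain natively.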
Details of the modelling structure (data policy viewpoint, data policies) as well as of the architecture governance processes are available in . However, the following figure provides a conceptual view of how the traceability between the data policy viewpoint and standard viewpoints supports the architecture governance processes related to the data policy. The example conventionally uses the most relevant and common viewpoints in architecture frameworks such as UPDM, DoDAF, or MoDAF. However, the methodology can be readily extended to other frameworks, such as Zachman, and to other viewpoints. N.B. The processes go beyond the hints displayed in this diagram. They also rely on the ontological categorization of the individual EA elements (a characterization often captured in the properties of individual elements, e.g. location, type of operation, communication protocol, etc.).
- EC, Data Strategy, available from https://ec.europa.eu/digital-single-market/en/policies/building-european-data-economy
- Gianni, D. (2015). Data Policy Definition and Verification for System of Systems Governance. In Modeling and Simulation Support for System of Systems Engineering Applications (eds L.B. Rainey and A. Tolk). doi:10.1002/9781118501757.ch5
- Gianni, D., et al., “SSA-DPM: A Model-based Methodology for the Definition and Verification of European Space Situational Awareness Data Policy”, Proceedings of the 1st European Space Surveillance Conference, June 2011.
- Gianni, D., et al. “Introducing the European Space Agency architectural framework for space-based systems of systems engineering.” In Complex Systems Design & Management, pp. 335-346. Springer Berlin Heidelberg, 2011.
Daniele Gianni is a business-educated, versatile computer engineer who works at the intersection of IT and management, introducing novel IT tools and model-based design methods to solve new problems in various domains, such as space, banking, and biomedical engineering. Gianni has worked for prestigious institutions in Europe and the US, initially in research and more recently in IT management roles. Gianni holds an MS and a PhD in Computer Engineering from the University of Rome Tor Vergata (Italy) and an MBA from the Frankfurt School of Finance and Management (Germany). Currently, he is a business systems analyst for an EU authority. https://www.linkedin.com/in/danielegianni/