Four years after its publication (February 2017), the Technical Guidelines for the implementation of minimum security measures for Digital Service Providers, published by the European Union Agency for Cybersecurity (ENISA), remain the reference standard for information security in the European context.
The paper was created to give Member States and digital service providers (DSPs) a common approach to security measures, obtained through a process of analysis and synthesis of existing key security practices and methodologies. Among its main sources of inspiration is certainly the ISO/IEC 27000 family (Information Security Management Systems Family of Standards), in particular ISO 27001 (which describes the requirements of information security management systems) and ISO 27002 (which details the related security controls). ENISA's publication was in turn taken as a reference model by numerous guidelines promoted by public and independent bodies and organizations, including the Minimum ICT Security Measures for Public Administrations issued by the Agency for Digital Italy (AgID), which we have discussed in a dedicated post.
In this article we will examine the original document, exploring its fundamental points and putting them to the test in light of the many changes brought about by digital innovation in recent years: as we will see, it is a text that has aged well, remaining an extremely valid reference point for any IT and digital service provider.
What is ENISA?
The European Union Agency for Cybersecurity (ENISA) is one of the agencies of the European Union; created in 2004 by Regulation (EC) No 460/2004, it has been fully operational since 1 September 2005 and is based in Athens, with a second office in Heraklion, on the island of Crete (Greece).
In a nutshell, ENISA is a center of expertise in network and information security for the EU, its Member States, the private sector and European citizens. ENISA works with these groups to develop advice and recommendations on good practices in information security; it also assists EU Member States in implementing relevant EU legislation and works to improve the resilience of Europe’s critical information infrastructures and networks.
The main purpose of ENISA is to strengthen existing competences in EU Member States by supporting the development of cross-border communities committed to improving network and information security across the EU. Needless to say, all of ENISA's security guidelines can also be applied in non-EU countries (such as the United States), possibly with the help of a good IT support team.
For further information on ENISA and its activities, you can consult their official website.
The story that led to the genesis of the Technical Guidelines for the implementation of minimum security measures for Digital Service Providers began in 2009, the year in which Directive 2009/140/EC reformed the Framework Directive 2002/21/EC by introducing Article 13a (security and integrity of electronic communications networks and services). In the first part of that article, the legislator requires providers of digital networks and services to carry out appropriate information security risk analysis and to implement adequate measures to guarantee the security (paragraph 1) and integrity (paragraph 2) of the services provided, in addition to the obligation to report any security breaches to the competent authorities (paragraph 3).
As a result of the reform, adopted between 2010 and 2011 in all Member States, ENISA, the European Commission (EC), national ministries and the national electronic communications regulatory authorities (NRAs) began working on a common framework with the goal of standardizing the implementation of Article 13a throughout the European Union. The team reached a consensus on two non-binding technical guidelines for NRAs: the Technical Guidelines on Incident Reporting and the Technical Guidelines on Security Measures. The latter document, whose latest version (2.0) was published in 2014, contains the general security requirements whose implementation methods are described in detail by the contribution we are presenting in this article, which was produced later.
Domains and Objectives
The 2017 document takes up the 25 security objectives outlined in the 2014 Technical Guideline on Security Measures, in turn grouped into 7 main domains, and further expands them to 27:
- D1: Governance and risk management
- SO 1: Information security policy
- SO 2: Governance and risk management
- SO 3: Security roles and responsibilities
- SO 4: Security of third party assets (third party management)
- D2: Human resources security
- SO 5: Background checks
- SO 6: Security knowledge and training
- SO 7: Personnel changes
- D3: Security of systems and facilities
- SO 8: Physical and environmental security
- SO 9: Security of supporting utilities (supplies)
- SO 10: Access control to network and information systems
- SO 11: Integrity of network and information systems
- D4: Operations management
- SO 12: Operational procedures
- SO 13: Change management
- SO 14: Asset management
- D5: Incident management
- SO 15: Security incident detection & response (detection capability)
- SO 16: Security incident reporting and communication
- D6: Business continuity management
- SO 17: Business Continuity (strategy and contingency plans)
- SO 18: Disaster recovery capabilities
- D7: Monitoring, auditing and testing
- SO 19: Monitoring and logging policies
- SO 20: Network and system tests
- SO 21: Security assessments
- SO 22: Compliance & monitoring
- SO 23: Security of data at-rest
- SO 24: Interface security
- SO 25: Software security
- SO 26: Interoperability and portability
- SO 27: Customer monitoring and log access
Apart from a few minor differences compared to the points already present in 2014, which have essentially all been maintained, the innovations introduced by the 2017 document concern the security of interfaces (SO 24) and of software (SO 25). In the intervening years, ENISA found it useful to emphasize the need for DSPs to establish and maintain:
- a policy that keeps the interfaces of services through which personal data transit secure;
- a policy that ensures the software is developed in compliance with security best practices (e.g. OWASP).
It is not difficult to imagine the reasons behind this decision: it is undoubtedly a tightening prompted by the exponential increase in cyber threats, with particular regard to data breach attempts that exploit zero-day vulnerabilities in the firmware of peripherals and in commercial software (such as operating systems), as well as the numerous attacks targeting insecurely written programs (SQL Injection, XSS, brute-force, MITM, etc.).
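To make the "secure development" point of SO 25 concrete, here is a minimal sketch (ours, not taken from the ENISA text) of the parameterized-query pattern that OWASP recommends against SQL Injection, using Python's built-in sqlite3 module; table and user names are invented for illustration.

```python
import sqlite3

# In-memory database with one hypothetical user, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Vulnerable pattern: string concatenation lets the input rewrite the query.
vulnerable_query = f"SELECT role FROM users WHERE name = '{user_input}'"

# Safe pattern: the placeholder makes the driver treat the input strictly as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(safe)  # [] -- the injection string matches no real user
```

Executing `vulnerable_query` instead would return the admin row, because the injected `OR '1'='1'` clause is parsed as SQL rather than as a name.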
Reference standards and policies
The security objectives identified by ENISA are derived from international standards commonly used by providers in the EU electronic communications sector, including:
- ISO 27001, Information security management systems
- ISO 27002, Code of practice for information security controls
- ISO 24762, Guidelines for information and communications technology disaster recovery services
- ISO 27005, Information security risk management
- ISO 27011, Information security management guidelines for telecommunications
- ISO 22301, Business continuity management systems
- ITU-T X.1056 (01/2009), Security incident management guidelines for telecommunications organizations
- ITU-T Recommendation X.1051 (02/2008), Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
- ITU-T X.800 (1991), Security architecture for Open Systems Interconnection for CCITT applications
- ITU-T X.805 (10/2003), Security architecture for systems providing end-to-end communications
- ISF Standard 2007, The Standard of Good Practice for Information Security
- CobiT, Control Objectives for Information and related Technology
- ITIL Service Support & ITIL Security Management
- PCI DSS 1.2 Data Security Standard
And by the following national regulations and good practices:
- IT Baseline Protection Manual, Germany
- KATAKRI, National security auditing criteria, Finland
- NIST 800 34, Contingency Planning Guide for Federal Information Systems
- NIST 800 61, Computer Security Incident Handling Guide
- FIPS 200, Minimum Security Requirements for Federal Information and Information Systems
- NICC ND 1643, Minimum security standards for interconnecting communication providers
Levels of implementation
For each of the 27 objectives, three possible levels of implementation are listed, which the document calls sophistication levels:
- Basic: basic security measures whose implementation achieves the minimum required security objective (with examples of measures in place).
- Industry standard: industry-standard security measures whose implementation achieves the objective according to market standards (with examples of measures in place and review tests).
- State of the art: state-of-the-art security measures, including continuous monitoring of implementation, structural review of implementation, and tracking of changes, incidents, tests and exercises to proactively improve implementation (continuous improvement). By adopting these measures the objective is achieved in the best possible way, on a par with the highest market standards. This level comes with examples of measures in place, evidence of a structural review process, and evidence of continuous improvement.
The 4 pillars of IT Security
The 27 ENISA objectives are generally built around four fundamental cornerstones, which in our opinion constitute the main pillars of IT security: authentication, authorization, backup and encryption. In the next paragraphs we will try to provide a comprehensive overview of each of them.
Authentication is the act of confirming the truth of an attribute of a single piece of data, or of information claimed to be true by an entity. In the IT field, authentication is defined as the process by which a system (computer, software or user) verifies the identity of another computer, software or user before granting it access to the associated services: in other words, authentication is a mechanism that effectively verifies that an individual is who they claim to be.
Authentication makes it possible to uniquely verify the identity of the subject, thus allowing the actions performed within the system to be traced back to them with reasonable certainty, and so facilitating compliance with the accountability principle. The connection between authentication and accountability was also underlined by the Italian Privacy Authority, according to which "the sharing of credentials prevents the actions performed in a computer system from being attributed to a specific person in charge, to the detriment also of the data controller, who is deprived of the possibility of checking the work of such relevant technical figures" (provision 4/4/2019).
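A common way such identity verification is implemented in practice is salted password hashing. The sketch below (function names and the sample password are ours, purely illustrative) uses Python's standard library to show the principle: the system stores only a salted hash, and authentication succeeds when the hash recomputed from the claimed password matches it.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 derives a key from the password; the random salt defeats
    # precomputed (rainbow-table) attacks.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def authenticate(password: str, salt: bytes, stored_hash: bytes) -> bool:
    # compare_digest performs a constant-time comparison, avoiding
    # timing side channels when checking the hashes.
    return hmac.compare_digest(hash_password(password, salt), stored_hash)

salt = os.urandom(16)
stored = hash_password("correct horse", salt)       # saved at enrollment
print(authenticate("correct horse", salt, stored))  # True
print(authenticate("wrong guess", salt, stored))    # False
```

Note that the plaintext password is never stored: only the salt and the derived hash are kept, which is precisely what ties an action to a specific credential holder.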
The term “authorization” means the permission or power to perform, carry out or exercise certain rights. In the IT field, authorization is defined as the process by which a system (computer, software or user) assigns different access privileges (also known as “permissions”) to certain users or groups of users. This assignment usually takes place by means of access policies, i.e. groupings of permissions that allow or prohibit the performance of certain activities (reading, writing, deleting, etc.) within certain logical spaces (filesystem folders, network drives, databases, features, sections of a site, etc.).
In practical terms, authorization is granted (through automatic mechanisms or with the manual intervention of a system administrator) by defining a series of Access Control Lists (ACLs), i.e. groupings that contain the users (or groups of users) who can benefit from each specific access policy.
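The ACL mechanism just described can be sketched in a few lines; the resources, groups and user names below are invented for illustration and are not part of the ENISA text.

```python
# Hypothetical ACL: each resource maps groups to the permissions they hold.
acl = {
    "reports/": {
        "analysts": {"read"},
        "admins": {"read", "write", "delete"},
    },
}

# Hypothetical group membership for two users.
user_groups = {"bob": {"analysts"}, "alice": {"admins"}}

def is_authorized(user: str, resource: str, action: str) -> bool:
    # Grant the action if any group the user belongs to holds that permission.
    policies = acl.get(resource, {})
    groups = user_groups.get(user, set())
    return any(action in policies.get(g, set()) for g in groups)

print(is_authorized("bob", "reports/", "read"))    # True
print(is_authorized("bob", "reports/", "delete"))  # False
print(is_authorized("alice", "reports/", "delete"))  # True
```

Real systems (operating systems, databases, cloud IAM) implement far richer variants, but the core check — match the requester's groups against the policy attached to the resource — is the same.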
In IT security, the term backup indicates a process aimed at securing the information of an IT system by creating redundancy: one or more backup copies of the data are made, to be used to recover (restore) the data in the event of accidental events (failures), intentional or malicious events (data breaches), human errors (accidental deletions), or system update and maintenance activities (and the consequent need to perform a rollback).
The term “backup” is used here as a container to simplify a broader concept, summarizing all the techniques aimed at preserving and guaranteeing the availability of data: in full, we therefore also mean the Disaster Recovery and Business Continuity procedures, achievable today both with on-premise tools and through cloud services.
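In its simplest form, the redundancy described above is just a timestamped, compressed copy of a directory from which files can later be restored. The following sketch (paths and file contents are invented for the demo) shows this with Python's standard library.

```python
import datetime
import pathlib
import tarfile
import tempfile

def backup(source: pathlib.Path, dest_dir: pathlib.Path) -> pathlib.Path:
    # Create a timestamped gzip-compressed archive: a redundant copy of the
    # data that can later be extracted to restore the original files.
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = dest_dir / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    return archive

# Demo on a throwaway directory.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "data"
    src.mkdir()
    (src / "report.txt").write_text("quarterly figures")
    archive = backup(src, pathlib.Path(tmp))
    with tarfile.open(archive) as tar:
        names = sorted(tar.getnames())
    print(names)  # ['data', 'data/report.txt']
```

A production backup strategy adds rotation, off-site replication and restore testing on top of this basic copy, which is where the Disaster Recovery and Business Continuity procedures mentioned above come in.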
Encryption (or cryptography, from the Greek kryptós, “hidden”, and graphía, “writing”) is the branch of cryptology that deals with methods of concealing the content of a message so that it is unintelligible to anyone not authorized to read it; the concealed message is commonly called a cryptogram, and the methods used are called encryption techniques.
In the IT field, the term retains the same meaning, indicating all the techniques and methodologies aimed at transforming sequences of characters through the use of a mathematical algorithm and the application of a secret encryption/decryption key (the main “parameter” of the algorithm): the secrecy of this key represents the security seal of every cryptographic system.
Based on the type of key used, each computer cryptography system can be divided into two macro-categories:
- symmetric encryption, when the same key is used both to protect the message and to make it readable again: or, to put it another way, when the sender and recipient use the same key to both encrypt and decrypt the text;
- asymmetric encryption, when two distinct encryption keys are used: a public key, used to encrypt, and a private key, used to decrypt.
More recently a third category has been added, which is more an implementation method than a category in its own right: quantum cryptography, which uses peculiar properties of quantum mechanics during the key-exchange phase to prevent the keys from being intercepted by an attacker without the two parties noticing. The first functioning quantum cryptographic network was the DARPA Quantum Network (2002-2007).
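The two macro-categories can be illustrated with deliberately toy examples (ours, for teaching only; real systems rely on vetted implementations of algorithms such as AES and RSA-OAEP): a one-time-pad XOR cipher for the symmetric case, and textbook RSA with tiny primes for the asymmetric one.

```python
import secrets

# Symmetric: one shared key used both to encrypt and to decrypt.
# XOR with a random key as long as the message is a one-time pad.
message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # the shared secret key
ciphertext = bytes(m ^ k for m, k in zip(message, key))
decrypted = bytes(c ^ k for c, k in zip(ciphertext, key))
print(decrypted)  # b'attack at dawn'

# Asymmetric: two distinct keys. Textbook RSA with the classic tiny
# primes p=61, q=53, so n=3233; e=17 is public, d=2753 is private
# (d is the modular inverse of e mod (p-1)*(q-1) = 3120).
n, e, d = 3233, 17, 2753
plain = 65
cipher = pow(plain, e, n)   # anyone can encrypt with the public key (e, n)
print(pow(cipher, d, n))    # 65 -- only the private key (d, n) decrypts
```

The asymmetric scheme removes the need to share a secret in advance, which is why the two approaches are typically combined in practice: an asymmetric handshake exchanges a symmetric session key, and the bulk of the traffic is then encrypted symmetrically for speed.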
Other ENISA publications
The Technical Guidelines presented in this article are just one of the many contributions that ENISA publishes annually in support of companies, organizations and IT professionals. In particular, we recommend that you retrieve the following:
- Personal data processing security manual, a useful handbook dedicated to the security of personal data processing from a GDPR perspective.
- Privacy and Data Protection on Mobile Devices, a report dedicated to Privacy and Data Protection on mobile devices.
Both documents can be downloaded for free in PDF format (the download links are within the respective articles).
We have come to the end of our contribution: we hope that what we have written will be a useful reference point for all IT professionals, information security enthusiasts, and anyone interested in deepening their knowledge of the fascinating and complex issues surrounding IT security.