Protecting Smart Buildings from Cyber Threats

Complex cyber threats to smart buildings range from locking doors and changing the building temperature to pumping gas into specific spaces to harm the occupants.

Author: Tomer Nuri, IOTSI SCCISP, tomernuri@gmail.com

In 1998, the sci-fi thriller “Dream House” was the first movie to address the potential threats posed by smart home and building infrastructures. Since then, it has become almost impossible to watch a movie depicting a hostile takeover of a high-class building without encountering at least one scene dealing with the disruption of the building’s infrastructures and management systems – and for good reason. Building automation and management systems have made considerable progress in recent years, and they now present a significant attack surface for threats capable of affecting the operational infrastructures and, through them, the physical domain (cyber-physical threats), as well as the serviceability and survivability of the ICT systems throughout the organization.

The domain of cyber security for Building Automation Systems (BAS) and Building Management Systems (BMS) belongs to the complex category of security for critical infrastructures, along with security for SCADA/ICS systems in the manufacturing and operations world and security for IoT systems.

Unlike the SCADA and IoT categories, where security efforts are being invested and the importance of security is a matter of consensus (even though accomplishing this objective is by no means trivial), in the smart building category the issue of security is still surrounded by considerable ambiguity. For example, consider an organization preparing to construct a new building or relocate to a new campus, where the various contractors have already been selected – the framework contractor, the structural work contractor, the finishing contractor and so forth. Unless one of those contractors is a cyber-technology contractor, these preparations are an example of poor planning that could leave the building’s operational systems vulnerable and necessitate a more substantial investment of resources later on.

The building management and automation systems constitute a critical backbone that links together and manages all of the systems that are essential to the uninterrupted function of the building, from climate controls through lighting controls, ventilation, fire alarm and extinguishing systems, elevator and parking controls to physical and logical access controls.

In recent years, BAS/BMS manufacturers have begun to adopt standard protocols such as BACnet and Modbus for linking the various systems to the management backbone. The transition from proprietary protocols to standard ones offers numerous advantages, especially more efficient integration and synergy, but it also introduces various threats and vulnerabilities.
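To illustrate why the move to open protocols widens the attack surface, the sketch below reads two holding registers from a Modbus TCP controller using nothing more than a raw socket. The protocol itself carries no authentication or encryption, so any host that can reach the controller's TCP port 502 can issue the same request (or a write). The IP address, unit ID and register numbers are illustrative assumptions, not values from any real installation.

```python
# Minimal sketch: read two holding registers from a Modbus TCP controller
# over a raw socket. Illustrates that Modbus TCP carries no authentication
# or encryption. All addresses and register numbers are illustrative.
import socket
import struct

CONTROLLER_IP = "192.0.2.10"   # assumed/illustrative controller address
MODBUS_PORT = 502              # standard Modbus TCP port
UNIT_ID = 1                    # assumed unit (slave) identifier
START_REGISTER = 0             # assumed register, e.g. a temperature setpoint
REGISTER_COUNT = 2

# PDU: function code 0x03 (read holding registers), start address, quantity.
pdu = struct.pack(">BHH", 0x03, START_REGISTER, REGISTER_COUNT)
# MBAP header: transaction id, protocol id (0), length of unit id + PDU, unit id.
mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, UNIT_ID)

with socket.create_connection((CONTROLLER_IP, MODBUS_PORT), timeout=5) as sock:
    sock.sendall(mbap + pdu)
    response = sock.recv(256)

# Response layout: 7-byte MBAP header, function code, byte count, register values.
byte_count = response[8]
values = struct.unpack(">" + "H" * (byte_count // 2), response[9:9 + byte_count])
print("Register values:", values)
```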

Control System Disruption

An attack against any one of these subsystems, or a synchronized attack against several of them, could lead to a complex cyber event with the potential to adversely affect the experience of visitors in the targeted building and, in some cases, even to pose an actual threat to the people inside it. For example, a sharp rise in temperature caused by disrupting the climate control system, combined with activation of the public address system and alarm sirens, might deteriorate to the point where staying in the building, or in specific parts of it, becomes impossible. A far more severe scenario could involve a meeting room whose electronically controlled doors are locked by hacking into the physical access control systems while, at the same time, gas-based fire-extinguishing systems are activated to pump gas into that room, creating a combined situation that poses an immediate danger to everyone inside.

Generally, the threats to smart building systems fall into several categories:

  •  Disruption or interruption of the normal function of controllers and operational systems;
  •  Disruption of telemetry and control data in order to display false control indications;
  •  Illegitimate transmission of commands to various controllers at an abnormal frequency from an illegitimate source (a “foreign” element connected to the building network in stationary or mobile form); and
  •  Illegitimate transmission of commands to various controllers at an abnormal frequency while posing as a legitimate source.

As stated, the ultimate objective of an attack is not always the critical operational systems themselves. In some cases, these systems are exploited by the attacker as a gateway to other critical IT systems (as in the hacking of the Target retail chain’s network, which began with network credentials stolen from its climate-control supplier).

The challenge in security for smart building systems stems from the fact that in most cases, such systems involve a substantial physical area containing decentralized infrastructures that provide hostile parties with convenient access options. Additionally, for various reasons, it is not always possible to completely isolate the operational communication infrastructures of the building from the other general infrastructures, especially in the case of building clusters.

Effective Security

Here are some examples of security measures that may prove effective against threats of the types described above.

Monitoring of traffic and signaling – monitoring the data traffic and the signals exchanged between the building management system and each individual controller can contribute to the detection of irregularities and threats to the critical infrastructures of the building.
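As a minimal illustration of this kind of monitoring, the sketch below flags controllers that receive write commands at an abnormal rate, assuming a protocol-aware capture stage has already parsed traffic into (timestamp, source, controller, command) records. The window length and threshold are illustrative assumptions that a real deployment would derive from a learned baseline.

```python
# Minimal sketch of frequency-based anomaly detection on BMS command traffic.
# Assumes commands have already been parsed into (timestamp, source,
# controller, command) records by a protocol-aware capture tool.
from collections import defaultdict, deque

WINDOW_SECONDS = 60             # sliding window length (assumed)
MAX_COMMANDS_PER_WINDOW = 20    # baseline command rate per controller (assumed)

recent = defaultdict(deque)     # controller id -> timestamps of recent commands

def observe(timestamp: float, source: str, controller: str, command: str) -> None:
    """Record a command and raise an alert if the rate for a controller is abnormal."""
    window = recent[controller]
    window.append(timestamp)
    # Drop timestamps that have fallen out of the sliding window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) > MAX_COMMANDS_PER_WINDOW:
        print(f"ALERT: {len(window)} commands to {controller} in {WINDOW_SECONDS}s "
              f"(latest from {source}: {command})")
```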

Security analytics – a small number of security analytics vendors offer support for monitoring and analyzing the protocols and data traffic used in smart buildings. These systems can identify evasive threats (stealth malware), including the repetitive transmission of commands to various controllers.

Segmentation & network encryption – micro-segmentation and L2 encryption technologies, incorporated and automatically activated in the communication infrastructure of the building, can minimize “migration” of threats between systems and establish an additional layer of security with no user involvement.
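The sketch below shows, in simplified form, the kind of default-deny policy that micro-segmentation enforces in the network fabric: a flow between two segments is permitted only if that segment pair is explicitly allowed. The segment names and the allow-list are illustrative assumptions.

```python
# Minimal sketch of a segment-to-segment policy check of the kind that
# micro-segmentation enforces: each observed flow is allowed only if the
# (source segment, destination segment) pair is explicitly permitted.
ALLOWED_FLOWS = {
    ("bms_head_end", "hvac_controllers"),
    ("bms_head_end", "lighting_controllers"),
    ("bms_head_end", "access_control"),
}

def is_flow_allowed(src_segment: str, dst_segment: str) -> bool:
    """Return True only for explicitly permitted segment pairs (default deny)."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

# Example: a compromised HVAC controller trying to reach the access-control segment.
print(is_flow_allowed("hvac_controllers", "access_control"))  # False -> block
```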

Physical Security Operations Centers (SOCs) – more and more organizations that have understood the severity of the risk in question are incorporating cybersecurity planning in the construction phase and are even establishing SOCs that focus on the detection of threats to the operational systems. For this purpose, automatic blocking and response management systems may be combined with SIEM systems that feature built-in support for analyzing events and metadata from BAS/BMS systems, as well as correlation capabilities.
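A hypothetical example of the kind of correlation rule such a SIEM might evaluate over normalised BAS/BMS events is sketched below: it flags cases where a door-lock command and a gas-suppression discharge target the same zone within a short window, mirroring the combined scenario described earlier. The event fields, zone names and five-minute window are assumptions for illustration only.

```python
# Minimal sketch of a SIEM-style correlation rule over normalised BAS/BMS events:
# flag a door-lock command and a gas-suppression discharge hitting the same zone
# within a short time window. Event fields and window size are illustrative.
from datetime import datetime, timedelta

CORRELATION_WINDOW = timedelta(minutes=5)

events = [
    {"time": datetime(2019, 3, 1, 10, 0), "system": "access_control",
     "action": "lock", "zone": "meeting-room-4"},
    {"time": datetime(2019, 3, 1, 10, 2), "system": "fire_suppression",
     "action": "discharge", "zone": "meeting-room-4"},
]

def correlate(events):
    locks = [e for e in events
             if e["system"] == "access_control" and e["action"] == "lock"]
    discharges = [e for e in events
                  if e["system"] == "fire_suppression" and e["action"] == "discharge"]
    for lock in locks:
        for disc in discharges:
            if (lock["zone"] == disc["zone"]
                    and abs(disc["time"] - lock["time"]) <= CORRELATION_WINDOW):
                yield (f"CRITICAL: door lock + suppression discharge in "
                       f"{lock['zone']} within {CORRELATION_WINDOW}")

for alert in correlate(events):
    print(alert)
```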

Reducing the vulnerability space – hardening critical subsystems and methodically enforcing operational processes will reduce the vulnerability space of every smart building.

Providing a security solution begins, as always, with understanding and recognizing the threat and with advance planning, followed by the deployment of an effective line of defense. In the field of smart buildings, the most important thing is to incorporate cyber-oriented thinking as early as the design stage, wherever possible.

For more information and a practical guide to a reference security architecture, visit the IOTSI website and access the relevant documents.

***

 

 

As IoT Security Concerns Rise, Are Solutions Keeping Up?

Many objects nowadays can be turned into internet-connected devices, and any one of them can make its way into the workplace. In fact, Gartner expects more than 65 percent of enterprises will deploy Internet of Things (IoT) products by 2020.

While employees may enjoy the benefits offered by IoT technologies, chief information security officers (CISOs) and other security decision-makers have a different view of these devices. IoT security, particularly the risk of personal data exposure, is quickly becoming one of their top priorities.

Some IoT Security Concern Is Based on Personal Experience

Not surprisingly, as the number of IoT devices in the workplace increases, so do the security threats associated with them. Over the next couple of years, we should expect that more than a quarter of cyberattacks will directly involve the IoT, Gartner warns.

With this in mind, researchers with Tripwire polled attendees at this year’s Black Hat USA to gauge their concerns about IoT security. Sixty percent of participants said they were more worried about IoT security in 2018 than they were last year, and even those whose concern had not changed still reported feeling worried about the security of IoT devices.

Some of this concern comes from personal experience: About 20 percent of respondents said they personally encountered an IoT-related attack at work or on their home network. But perhaps the more alarming statistic is that 14 percent said their IoT devices may have suffered an attack, but they didn’t know for sure.

As Craig Young, a computer security researcher with Tripwire’s Vulnerability and Exposures Research Team, points out, too many security professionals lack the basic tools, security systems and knowledge to determine if their devices have been compromised, and that could lead to serious trouble down the road.

The Business Value of IoT Solutions

Eliminating IoT from the enterprise is not an option. For many organizations, IoT solutions add significant business value. As Consumer Goods Technology reported, “One of the most game-changing aspects of smart, connected products is how they allow product companies to create new consumer needs and establish new user habits. These new smart connected products rely on new habits, on trying to predict what will tick and what will be a hit with today’s consumers.”

Based on a 2017 Forrester report, Network World reports that the IoT improves business value in three ways:

  1. Improved product functions through design.
  2. Better business operations with digital automation.
  3. Enhanced consumer services.

However, all this IoT technology also creates a larger attack surface that organizations aren’t prepared to defend. As the aforementioned Gartner report states, “IoT security is often beyond the average IT leader’s skill set, as it involves managing physical devices and objects rather than virtual assets.” Security of IoT devices, the report continues, is often a barrier to the IoT’s overall effectiveness, which, in turn, hurts its business value.

IoT Data Is a Nightmare for the GDPR and Other Privacy Laws

The IoT also generates massive amounts of data, and this sets up another security issue. According to the Tripwire survey, the top issue surrounding IoT security is protection of personal data, followed by botnets and network compromise.

Because of how IoT devices collect data, it is more difficult to ensure data privacy for consumers, especially under the European Union’s General Data Protection Regulation (GDPR) and other new privacy laws. “The aggregation and correlation of data from various sources make it increasingly possible to link supposedly anonymous information to specific individuals and to infer characteristics and information about them,” wrote Cameron F. Kerry for Brookings.

Data generated from a smart city’s web of cameras and meters, for example, is nearly impossible to protect under privacy regulations. How do you alert thousands of otherwise anonymous people that their personal information is being gathered and stored? The onus falls on the security departments of the smart city to ensure the IoT devices they are using are secure, as are all aspects of data collation and storage. At the same time, as we’ve seen, security experts are still trying to figure out the best way to approach the IoT’s flaws and vulnerabilities.

Embrace Time-Tested Techniques to Secure the IoT

There are solutions on the horizon. The 2018 Global PKI Trends Study from the Ponemon Institute and Thales found that the IoT is “the fastest-growing trend in the deployment of applications that use public key infrastructure (PKI).”

“For safe, secure IoT deployments, organizations need to embrace time-tested security techniques, like PKI, to ensure the integrity and security of their IoT systems,” said John Grimm, senior director of security strategy at Thales eSecurity.
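As a minimal sketch of one PKI building block for IoT, the example below checks that a device certificate was signed by a trusted CA before the device is accepted onto the network, using the Python “cryptography” package. The file names are placeholders, the CA is assumed to use an RSA key, and a real deployment would also check validity dates, revocation status and the full chain.

```python
# Minimal sketch of one PKI building block for IoT: verify that a device
# certificate was issued by a trusted CA before accepting the device.
# File names are placeholders; an RSA CA key is assumed.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

with open("ca_cert.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("device_cert.pem", "rb") as f:
    device_cert = x509.load_pem_x509_certificate(f.read())

try:
    # Verify the CA's signature over the device certificate body.
    ca_cert.public_key().verify(
        device_cert.signature,
        device_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        device_cert.signature_hash_algorithm,
    )
    print("Device certificate was issued by the trusted CA:",
          device_cert.subject.rfc4514_string())
except InvalidSignature:
    print("Device certificate is NOT trusted - reject the device.")
```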

IoT security jumped in importance for many security professionals this year because IoT use has increased within many organizations. Now, our tools and solutions need to catch up.

SOURCE       Security Intelligence

AUTHOR     Sue Poremba

Understanding what is ‘personal’ in an IoT context

Submitted by IoT Security Institute


Author: Nicole Stephensen

In our introductory podcast in the Privacy Matters series, we talked broadly about what privacy is… and set the stage for further discussion about the opportunities privacy best practice can present in an Internet of Things (IoT) eco-system.

Privacy is about the protection of personal information in accordance with the law.

It’s not surprising, then, that the term “personal information” has a specific meaning in privacy law. Although definitions do vary slightly across jurisdictions, the term is generally understood to mean information about an identified individual, information that identifies an individual or information that could reasonably lead to the identification of an individual. An individual is a person. And only a natural person (that is, someone who is alive) can have personal information.

Depending on which country you’re standing in, there is some potential for confusion when dealing with privacy language – particularly if you are somewhere that uses the term “personal data” in place of what others would call personal information. For instance, in its General Data Protection Regulation, the European Union talks about personal data – which is defined as any information relating to an identified or identifiable natural person.

When talking about data, some professionals take the view that data are random, meaningless facts, figures, numbers or shapes unless they are processed, structured, organised, interpreted or presented in some way as to make them meaningful – and that it is the meaningful presentation of data that then makes information.

This is quite possibly a subject warranting a session of its very own, and opinions on the matter are surely divided. But, as a privacy professional, and in the context of what privacy laws worldwide are intended to do, I would simply offer that we must treat any perceived difference between personal information and personal data as semantic.

Personal information is about the nature of the information, not how it’s captured. It is technology neutral – it can exist across media and in a variety of formats. It can be captured and held in myriad ways – including on paper, on film, on a computer or other device, or in the cloud. It may be easy to understand just from looking at it, or it may be a more complex series of numbers or letters.

How do we, though, apply what we know? We have a definition for personal information, but what does it actually mean? What does personal information look like? And just how much information are we talking about here?

Personal information is vast.

It can include what I think of as “the basics”:

  •  Contact details, like name, address, telephone number;
  •  Demographic details, like birth date, age, sex, marital status, nationality or citizenship; and
  •  Education, financial, criminal or employment history.

But, for the uninitiated, it can also include information that might not immediately come to mind, including:

  •  Unique identifiers, such as an ID or license number, fingerprint, iris scan or DNA profile;
  •  Location details on a given day or time;
  •  Recorded image (such as a photo or video);
  •  A voice recording;
  •  An opinion or point of view;
  •  Online browsing history or patterns; and
  •  Other data that is generated or captured by a network or device (such as payment history, number of steps travelled, or the end-point of a taxi trip).

Sometimes you may already know who a person is, and various additional elements of their personal information can be combined to form a profile. Perhaps it’s a financial risk profile, a health (or fitness) profile or something linked to whatever it is you’re selling, like a profile for the “person most likely to drive an electric sports car”.

In your role within the IoT eco-system, if you wonder whether something is personal information, ask yourself the following: Is this information about a living person? Is the information about a person whose identity is known to you? If not, is the person identified by this information? Or could this information reasonably lead to them being identified (whether the information is taken on its own or in combination with other information)?

 

It’s a fallacy to think that personal information is simply a means to a marketing end. It is, in fact, used across sectors to attend to any number of business functions.

In an IoT direct-to-consumer context, such as enabling smart homes or selling smart cars, I would expect personal information to be collected and used to:

  •  Entice or sign up a customer base;
  •  Personalise customer experiences with a product;
  •  Maintain accounts and contact with customers over time;
  •  Identify opportunities to up-sell;
  •  Gauge customer satisfaction; and
  •  Address problems, faults or questions relating to a product.

Now if IoT technologies are deployed in a wider context, such as in a smart or sustainable city environment (where vendors, industry champions and cities work together), I would expect personal information to be central to:

  •  Providing, on a large scale, beneficial technologies to citizens (such as free urban wi-fi, smart water meters or a public transport route planning app).

Additionally, such information would be involved in:

  •  Identifying, understanding and perhaps even influencing economic, social or environmental outcomes in the community (for example, improving bushfire evacuation timeframes through use of environmental sensors linked to early warning applications on mobile phones or smart TVs);
  •  Implementing technical or physical security safeguards in vulnerable city locations (such as strategically placed monitoring devices in public buildings);
  •  Tracking citizen behaviour across an area of interest (such as which demographic regularly uses bike-share services in the inner city);
  •  Understanding consumption patterns in the community – whether in relation to resources (like water), infrastructure (like roads) or essential services;
  •  Identifying opportunities to on-sell a product directly to citizens (such as smart garbage bins in areas of low recycling uptake); and, of course,
  •  Engaging in direct marketing so that vendors can attract the community to additional products or services.

In a cities-centred IoT environment, citizens generally give over their personal information in exchange for a service, benefit or perceived benefit. They may be able to opt-in, such as by choosing to download a cool app that lets them know when their library selections are available for pick-up. In other cases they have little choice, such as when a city rolls out a new way of delivering an old service… like compliance officers now wearing body-worn cameras when attending a complaint (as opposed to using the old pen-and-paper approach).

Personal information can be both what a person gives to you in order to participate in your initiative or use your product and what you (as in, your network, application or relevant IoT device) learn or generate about a person. Consider, for example, the extent of personal information a home assistant device for the elderly may acquire over time… and just what will, or could, happen to that information.

It bears reminding that the ability to collect personal information – and to store it, do things with it, share it with or sell it to others – imposes an obligation on vendors, data storage providers, industry champions, public policy makers, municipalities, data brokers and others with involvement in the supply and deployment of IoT devices. In addition to the requirement to protect personal information in accordance with the law, there is a responsibility, a duty, to handle it in a way that is fair and within the reasonable expectation of the person to whom it relates.

Failing to embrace this duty – failing to promote privacy accountability within your part of the IoT eco-system – will undermine community trust and, likely, the success of whatever your initiative is.


 

Nicole Stephensen is the Executive Director for Privacy and Data Protection at the Internet of Things Security Institute.

What is Privacy?

Submitted by IoT Security Institute


Author: Nicole Stephensen

It’s easy to assume that we are all on the same page about what privacy is. But, as information technologies, machine learning, social media, cloud services and the availability of data driven gadgetry increasingly bridge the privacy profession with cyber security, data analytics, risk management and others, it’s important to unpack the concept a little:

Privacy is considered a basic right or freedom that a person should be able to enjoy – for example, freedom from unnecessary intrusion into one’s personal life. It can be talked about in terms of a person’s home or the physical territory they occupy… their body… their communications… their information. One challenge with defining privacy – particularly in today’s connected and social world – is that everyone has a different point of view (or threshold) in relation to what they consider to be private. To many, privacy has morphed. It’s no longer really about the right to be left alone. Increasingly, it seems to be about the ability to exercise choice or control.

The world over, our understanding of privacy has been made more concrete by setting it out in law. Admittedly, privacy laws vary. Some are more robust than others, some are local, others… more global. Some focus heavily on principles, others have more teeth in terms of enforcement. But all allow the extent of what it means to “have privacy” to be understood by the broader community and by those with whom the community deals:  the public service, businesses, industry, academia and the like.

If we were to capture privacy in a sentence, something we can all digest…  privacy is about the protection of personal information in accordance with the law.

In our increasingly data-driven world, personal information is “money”. It’s a tangible asset. It’s essential for a host of business functions:

  • selling a product
  • delivering a service
  • providing access to benefits
  • identifying and solving problems
  • informing and making decisions
  • creating public policy, and so forth.

Personal information is both valuable and necessary for those in the business of building tech, and those in the business of deploying that tech. I’m talking about vendors and their sales teams, data analytics companies, municipalities exploring their opportunities in the growing “smart cities” arena, social media, government and more.

The ability to have such information – to collect it, store it, manipulate it, send it on to someone else – also imposes a responsibility. There is an implicit obligation to ensure that personal information is handled in a way that is fair and within the reasonable expectation of the person giving it.

In relation to the IoT sphere specifically… privacy presents a wonderful opportunity. Earlier I said that privacy is about protecting personal information in accordance with the law. It absolutely is. However, privacy is not just a compliance exercise. It’s not intended to be a roadblock to innovation or a drag on progress.

It’s a way of thinking. It’s about considering – or, rather, critically thinking about – what will happen to personal information at all stages of its lifecycle within a particular context. Maybe the context is narrow and limited in its application, like phasing out manual employee timesheets in favour of a smartphone app that handles the same functions … or maybe the context is broader, like deploying facial recognition in shopping mall touch screen directories nationwide.

Embedding a Privacy by Design approach in the overall deployment framework for IoT technologies means that personal information is rightly elevated beyond being associated only with its endpoint – like, when it has made money or provided insight into whatever your marketplace is. It again becomes associated with the person who provided it in the first place and with their right that it be protected. Privacy will sit prominently with information security, where risks and vulnerabilities will be identified and addressed at the outset, and at critical junctures thereafter. It will no longer be an afterthought. And it certainly will not be addressed at the last minute, or tacked on after the fact, in a desperate attempt to recover public confidence after a data breach, system failure or some other crisis.

If the necessary protections for personal information are well understood and managed in an IoT context, and steps are taken to ensure the community

  • knows about it,
  • has confidence that their information is secure and being appropriately used,
  • has the ability to ask questions (and receive open, accountable answers)…

… then trust in a business, a brand, a product, an innovative way of doing things is much more likely to flourish.

Over the coming weeks and months, my colleagues and I at the Internet of Things Security Institute will be exploring privacy in greater depth. We will talk in terms of basics (such as defining what personal information actually means in an IoT context) and also explore specific privacy issues that can have significant impact on the successful deployment of IoT technologies. We will discuss the powerful relationship between privacy and security in organisational and operational contexts, and we will explore the concept of Privacy by Design and how it fits logically within an overarching security framework.


 

Nicole Stephensen is the Executive Director for Privacy and Data Protection at the Internet of Things Security Institute.

 

Hackers don’t give a toss about policy

Written by IoT Security Institute

Quite an attention-seeking title, I would think. Apart from the obvious read-further carrot, I think there is a place for commentary on this subject. I am sure many of my industry colleagues will disagree, but have we been lured into a false sense of security by relying on policies and accreditation certificates? Now let me say upfront: only a fool would suggest that regulatory standards and accreditation are pointless and without purpose. The question here is, have we relied on them to solve all our problems? Have organisations moved away from security tools, intelligence gathering and actionable evidence in preference for documentation controls?

I guess we need to go back to when it all started. For those that remember, there was a time when the security administrator was also the security manager, security architect and risk manager. Business and technology had not yet collided, leaving an amalgam of confusion and opportunity. It was all about providing a simple service and, most importantly, keeping it up. Adding a few security controls came in handy, but formalised processes and procedures were ad hoc at best.

Enter the Big Four. Traditionally rooted in financial services, this crew saw an opportunity to clean up the IT Wild West. Bringing policies, procedures and accountability to the IT world introduced a new standard of risk management, project delivery and operational services. This was a good thing. Organisations had something to work with, something to measure against. The gates opened and soon we had a breadth of industry standards and methodologies. The accountants had brought regulation and accountability to the technicians. Security was now as much about certification as it was about security testing and intelligence gathering.

With the emergence of smart technologies and data analysis, and the increasing number of ransomware attacks and zero-day exploits, the reliance on documented security was starting to show some cracks in the armour.

The growth in government and private sector investment in cyber operation centres is not a coincidence. We are putting troops back on the ground. The emergence of cyber hunting and cyber threat intelligence services is an indication that we need more actionable evidence. A search of online job sites will quickly highlight the increased number of postings for penetration testers, ethical hackers and cyber threat analysts. The shift appears to be toward proactive services: the need to find out what may happen before it happens, and a proactive cyber response plan in preference to a post-attack reactive posture.

As stated in the introduction, hackers don’t give a toss about policy. They understand technology and they understand human behaviour. Additionally, with the increase in attack vectors and the convergence of information and operational technologies, hackers do not assess their victims on whether they are currently certified, industry compliant and appropriately governed; it is far simpler than that. Hackers will most likely seek flaws and weaknesses in defence controls – flaws that can hide a zero-day attack until it is ready to launch. Some may say the age of the accountant mindset is over and the age of the engineer has begun. It is evident that engineering-like precision is required to address emerging cyber attacks.

To further make the point: recent attacks on large organisations have shown that, although well certified and compliant, many were soft targets for sophisticated attacks. They effectively had “no eyes” when it came to mapping the attack vector. Progressive enterprises, and certainly government insiders, are aware that we need more on the ground to address this wave of cyber disruption. Of course, there will be challenges. Selling the need for cyber threat intelligence services to a broader management base that, by and large, is more comfortable with policy and standards documentation than with cyber attack analysis data will require some work.

As I mentioned earlier only a fool would disregard the place and importance of accredited certification, policies and standards. However, in a smart world these controls on their own simply will not do. To rely on IT policy to stop the cyber criminal is like relying on a law journal to stop crime in our streets. We need the frameworks, the standards and policies to map out the landscape, highlight the blocks and pieces that require control and consideration. This information provides insight into what needs to be defended and why. It should not be considered as an alternative to cyber security controls and threat intelligence services.

After all, when toe to toe, I think most would prefer a cyber warrior equipped with an abundance of cyber bullets than a kit full of instruction manuals.

Author

Alan Mihalic CISSP ISSAP ISSMP CISM

Principal Cyber Security Advisor, Writer, Keynote Speaker, President IoTSI

She’ll be right. Not for long. New data breach notification laws set to shake up Australian businesses.

Written by


Australian businesses that have to date been able to self-manage their indiscretions and security breaches will soon be legally obliged to disclose data breaches, due to a new bill passed by the Federal Government. After many failed attempts across numerous governments, the bill has finally been passed in the Senate.

In a nutshell, companies will need to disclose if their systems have been compromised due to technical shortcomings or cyber attack.

Given the recent number of data breaches, and the subsequent disclosure of considerable private information, many believe the legislation is long overdue. So much so that the bill has the support of both sides of parliament. It is a clear message that regulation is required to keep the interests of the community at the forefront of business practices. It is a reaffirmation that an individual has the right to privacy, and that whoever collects, processes and stores that information has a responsibility to protect it in accordance with community – and now legal – expectations.

 WHO WILL BE AFFECTED?

The bill applies to organisations that have responsibilities under the Privacy Act:

  • Australian Government agencies
  • Businesses and not-for-profit organisations with an annual turnover of more than $3 million.

The Privacy Act also applies to some types of businesses with an annual turnover of $3 million or less. These businesses include:

  • Private sector health services providers (even alternative medicine practices, gyms and weight loss clinics fall under this category)
  • Child care centres, private schools and private tertiary educational institutions.
  • Businesses that sell or purchase personal information, along with credit reporting bodies.

The bill stipulates that disclosure is required when a breach qualifies as an “eligible data breach” – defined as one where an individual is believed to be at “risk of serious harm” due to the disclosure of their personal information. For more information on “risk of serious harm”, the Australian Law Reform Commission website provides additional detail.

Some have argued that an organisation’s ability to internally evaluate what constitutes “risk of serious harm” provides an opportunity for some organisations to get around the bill’s mandatory disclosure requirements based on their interpretation of serious harm. Certainly, this opportunity may exist; however, organisations need to tread carefully, as blatant disregard and avoidance will be identified and organisations will be held accountable.

NOTIFICATIONS AND PENALTIES

Where an organisation has identified a breach, it is required (within 30 days) to notify the Privacy Commissioner and affected customers. As detailed in the bill, failure to comply with the new notification scheme will be “deemed to be an interference with the privacy of an individual” and there will be consequences:

“A civil penalty for serious or repeated interferences with the privacy of an individual will only be issued by the Federal Court or Federal Circuit Court of Australia following an application by the [Privacy] Commissioner. Serious or repeated interferences with the privacy of an individual attract a maximum penalty of $360,000 for individuals and $1,800,000 for bodies corporate.”

CLAIMS OF NOTIFICATION FATIGUE AND HEAVY HANDEDNESS

Critics and opponents of the bill claim organisations will be overwhelmed with reporting requirements, resulting in notification fatigue for businesses. The alleged “unreasonable compliance burden” would see organisations having difficulty understanding their obligations and compliance responsibilities.

Some may argue these critics have failed to grasp what is required in this digital age. The game has changed, the risks have intensified and it’s all there online for the taking. The stakes have never been higher: our nation’s assets have never been more under threat, and an individual’s right to keep certain aspects of their life private – not simply for financial or legal reasons, but for ever-increasing health concerns – just does not seem to resonate with some. For others it is a simple question: do the benefits merit the additional effort? To use the country’s current default standard, “the pub test”, it certainly appears so.

There is no doubt the bill will introduce additional workloads and complexity for some organisations, but this is simply the cost of doing business in the digital age. The upside of the online economy far outweighs any additional reporting requirements. Moreover, recent Australian data breaches (2016) highlight that the lack of compliance reporting and regulatory control has caused its own share of “notification fatigue”, albeit for different reasons. If you are not convinced, consider the following statistics, courtesy of CIO Australia (21 September 2016):

“More data breaches have been reported in Australia than anywhere else in the APAC region so far this year, according to a security index. The Gemalto Breach Level Index recorded 22 incidents in Australia in the first half of the year, far more than the 13 recorded in India and seven in Japan and New Zealand. The APAC region accounted for 8 per cent of incidents worldwide, compared with 79 per cent that targeted North America. The most severe incident in Australia so far this year was Menulog, which suffered from a breach of 1.1 million records leaving customer names, addresses, order histories and phone numbers exposed.”

These statistics may suggest, certainly to critics of the bill, that Australia’s appalling APAC ranking is due to our willingness to disclose, more so perhaps than other nations – proving, in their view, that legislation is unnecessary. Skeptics would suggest it is a data breach iceberg: you need to worry more about what you do not see than about what you do.

Australia is out there, way out there and it is starting to become a major concern to the Australian Government, Australian security professionals and savvy business leaders.

Emerging IT trends will only heighten these concerns, and the Internet of Things will blow the door off current ideas of what constitutes a target, introducing levels of breach and disclosure that for many still seem unimaginable. Given that your TV or refrigerator is capable of making your personal information publicly available, relying exclusively on company self-regulation and reporting somewhat underestimates the emerging risk and associated impacts.

Gemalto Breach Level Index

Many cyber security experts ask: if this is what we know in a non-mandatory breach disclosure Australian context, what don’t we know? How much private data is out there waiting to be discovered?

There have been some suggestions that the new bill is too heavy-handed and that voluntary disclosure is a better option. Much as this sounds like a plausible option, can it really hold up? Admittedly, many organisations would do the right thing and disclose their failures; however many, given the right circumstances, may be inclined to “damage control” the situation and hope it can be managed internally. This may suit an organisation’s overall best interests, but does it represent the best interests of the community or the individual? The ramifications of best or self interest simply do not scale well in an online world. At some point someone has to pay – the customer, the organisation, the community – and often more than originally anticipated.

 TAKE IT AS A POSITIVE

Organisations should view the introduction of this new legislation as a positive: an opportunity to align people, processes and technology to ensure better compliance and more effective security controls to combat cyber attacks and emerging threats. This legislation will provide clarity of purpose for those involved in ensuring compliance, and will assist senior executives with identifying where their security dollars should be spent and for what desired outcomes.

 

Data is your Organisation’s Core Business: Are you Prepared to Govern it?

Written by


Annelies Moens, Managing Director of Privcore, discusses why data and its governance is important to most, if not all, organisations and is something that leaders and managers need to embrace. 

The below extract is based on an article and presentation prepared for the Australian Institute of Company Directors’ Australian Governance Summit, 1st to 2nd March 2018, Melbourne, Australia, and published in the Journal of Data Protection & Privacy, Vol. 2.1, 2018, pp. 16-21, by Henry Stewart Publications, London, United Kingdom.

Data is an asset or a liability depending on how it is managed and in this sense, every organisation (business, government and not-for-profit) is a data business. As an asset or a liability, data is a core topic with which leaders and managers must make themselves comfortable and familiar.

A lot of data in organisations is about people, their lives, what they do, where they go, what they buy, what they like, what they say, what they look for, what they do for entertainment and so on — it is personal information and thus in many instances is subject to privacy and data protection requirements. Data is so integral to organisations that it must be treated as core business. Data protection and privacy also have the added dimension of being considered a human right, as recognised in the UN Declaration of Human Rights and the International Covenant on Civil and Political Rights.

Four key themes affecting the governance of data

1) Trust and social licence

The 2018 Edelman Trust Barometer reveals that trust is in crisis around the world. In 20 of the 28 economies surveyed, business, government, NGOs and media are generally not trusted. Yet for innovation to flourish, trust is vital; and innovation depends increasingly on the use and sharing of data.

In Australia, the Office of the Australian Information Commissioner’s Community Attitudes to Privacy Survey 2017 shows that ‘one in six [citizens] (16%) would avoid dealing with a government agency because of privacy concerns, whilst six in ten (58%) would avoid dealing with a private company’. Leaders and managers need to think about how their organisations communicate with stakeholders. How do they build and shape expectations with customers? It is certainly not shaped by the terms and conditions of products and services.

2) Mass customisation

The term ‘mass customisation’ refers to our present-day era where we have taken the handmade bespoke aspects of the pre-industrial era and the mass production capability of the industrial revolution era to be able to produce customised items at scale.

In our mass customisation era there is a need for customer centricity, where we need to understand our customers at an individual level in order to provide for their bespoke needs. Yet at the same time, ensuring an organisation has a 360-degree view of a customer is NOT a customer-centric approach, as customers may not want to fully reveal themselves to organisations. Customers may want to be able to choose what they share.

Privacy is all about giving the customer control of what happens with their data — making them the driver and the reason for our products and services. As such, customer service and managing failure, including data breaches, are becoming increasingly crucial touchpoints in determining the level of engagement and goodwill customers have towards organisations.

3) Increasing number of data breaches

Being able to manage failure is increasingly important as more and more organisations are subjected to data breaches owing to either their own inadequate security practices, system/human failures or unfortunate external attacks against which they cannot fully protect themselves.

The more data that leaves controlled and protected environments, the more we are polluting our data ecosystem. Identity fraud increases, trust diminishes (both ways between customers and organisations) and billions of dollars are wasted. Indeed, an Australian expert on data breaches testified before the US Congress on the impact of such breaches on identity verification, and outlined that static knowledge-based authentication is becoming increasingly risky in a post-breach data world. Focus on cybersecurity to ensure organisations have control of the data for which they are custodians is becoming increasingly crucial.

4) Technology

Technology is rapidly dictating our policies as legislatures and policy makers struggle to keep up. We are in a world where it is easier to keep data than delete it and it is easier to create systems that retain data. An increasing amount of data will be collected about people as more devices become connected to the Internet of Things, which saturates our lives.

We have new technologies that are affecting massively the handling of customer data; consider:

  • Automated driverless cars and the collection of masses of data from sensors, voice and behaviour.
  • Automated algorithmic decision making and artificial intelligence affecting our day-to-day lives.
  • Social credit scoring.
  • Biometrics and facial recognition in private and public spaces.
  • Digital identity management.
  • Cloud services through which data storage and processing is outsourced.

While none of these technologies are inherently bad, they can rapidly lead to massive increased individual risk, through over-collection of data, data breaches and misuse, or out of context use. These issues can be minimised with appropriate governance, which will be needed in order to retain customer trust. We need to build core human values and ethics into our products and services. We must keep individuals at the centre and build technology that respects human values, including privacy and security.

Five ways leaders and managers can build trust

1) Develop a culture of respect

The importance of culture cannot be underestimated. In an independent review of the Accident Compensation Corporation (ACC) in New Zealand following a data breach that occurred in 2012, culture was the biggest transformational issue. ACC had had inconsistent practices around respecting customer data, which led to numerous incidents of inappropriate data handling. Today, New Zealand government agencies have privacy maturity assessment frameworks in place and a chief privacy officer who operates across the whole government, so that confidence and trust in New Zealand government can grow.

2) Make privacy part of risk management frameworks

According to the World Economic Forum’s 2018 Global Risk Report, alongside extreme weather events and natural disasters, cyberattacks and data fraud/theft are the third and fourth most likely risks to occur. As such, privacy needs to be part of risk management and assurance processes.

3) Make leadership accountable

What gets measured gets done. If no person at senior executive level or board level is responsible for the decisions their organisation makes with respect to what happens to data, the direction the organisation takes will likely be dictated by factors other than core values, such as respect for customer data.

4) Monitor key indicators such as input from customers, suppliers and employees

Listen not just to senior executives, but also to customers, suppliers and a broad set of employees. Consider how fast bad news travels to leadership and whether privacy is a regular board agenda item. How are failures and complaints managed within the organisation?

5) Collaborate with the regulator

Regulators with collaborative approaches tend to have more successful regulated outcomes (plus most complaints are negotiated settlements). The New Zealand Privacy Commissioner, as an example, is taking an innovative regulatory approach by introducing a Privacy Trustmark, whereby it is willing to indicate services or products that take data protection seriously and give customers confidence their personal information will be respected and protected.

Summary

It is incumbent on leaders and managers to know what goes on in their organisation in terms of the handling of data; only then can they steer their organisation to adopt and develop innovations that respect one of their most valuable assets. Failure to do so is likely to lead to customer dissatisfaction and loss, regulatory intervention, fines, shareholder and customer litigation and class actions, and decline in share value and profits.

Original Source: Legalwise Seminars

Annelies Moens, CIPT, FAICD, CMgr FIML, is a widely recognised global privacy expert and thought leader with close to 20 years’ experience, trusted by business executives, government and privacy professionals.

She is Managing Director of Privcore and co-founder of the International Association of Privacy Professionals in Australia and New Zealand (iappANZ). She held elected roles during her six-year Board term of iappANZ, including as President. She has held several senior leadership roles, including as Deputy Managing Director of a privacy consultancy, External Relations Manager at an online legal publisher, Group Manager and Chief Privacy Officer at a copyright licensing agency, and Deputy Director at the Australian privacy regulator.

She has an MBA in general international management (distinction) from the Vlerick Business School in Belgium, is a qualified lawyer, and has undergraduate degrees in computer science and law (first class honours) from The University of Queensland, Australia. Contact Annelies at operations@privcore.com