The Privacy Post #18

Georgia Wright Privacy

Every week we’ll be rounding up the latest news in privacy and other interesting articles.

UK data protection laws to be overhauled

The UK government has released details about the Data Protection Bill. The Bill will bring the EU’s General Data Protection Regulation into UK domestic laws, as well as a few more protections. Find our full analysis here.

“It will give people more control over their data, require more consent for its use and prepare Britain for Brexit,”
– Matt Hancock, Digital Minister.

Hotspot Shield VPN accused of violating users’ privacy

A privacy group has recommended that the Federal Trade Commission investigate a popular virtual private network (VPN) provider. The Center for Democracy and Technology has accused AnchorFree Inc. of misleading users of its Hotspot Shield VPN service with claims that it ensures their online privacy. According to an analysis of Hotspot Shield’s source code, the provider tracks users and does not adequately secure their personal data.

Disney Faces Children’s Privacy Class Claims Over Mobile App

A total of 43 Disney mobile apps have been accused of collecting children’s data in violation of the Children’s Online Privacy Protection Act. The Walt Disney Co. now faces a federal lawsuit, brought by a concerned mother, over tracking software embedded in the apps which “exfiltrate[s] that information off the smart device for advertising and other commercial purposes.”

Windows 10 privacy: You’re happy for us to collect your data, says Microsoft

Microsoft has claimed that feedback following the release of Windows 10’s personalisation options has been “positive”. These options include privacy-enhancing features such as an online privacy dashboard that lets users control activity data across several Microsoft services. Other changes include more controls over location, speech recognition, diagnostics, relevant ads and recommendations. Windows 10 also offers more transparency concerning Microsoft’s collection and use of data.

Like this post? Subscribe to our weekly newsletter here to be updated with all privacy news, Cognitiv+ and more.

New Data Protection Bill: post-Brexit plans & the GDPR

Georgia Wright Privacy

UK government announces new data protection legislation

Preparations for the EU General Data Protection Regulation (GDPR) have been well underway ahead of its May 2018 enforcement date. News this week concerning the government’s intent to legislate new data protection laws means that these preparations will stay relevant. However, it has become apparent that the new British laws will go above and beyond the GDPR in protecting privacy.

Why do we need new laws?

“It will give people more control over their data, require more consent for its use and prepare Britain for Brexit,”
– Matt Hancock, Digital Minister.

The government’s statement of intent, released on 7 August 2017, was hinted at during the Queen’s Speech. In an opening statement, Matt Hancock sets out the main purposes of the legislation:

  • Support innovation by ensuring safe data processing standards
  • Ensure data protection in a system with more accountability but less bureaucracy
  • Tougher rules on consent, rights to access, move and delete data
  • Enhanced enforcement
  • New powers for the Information Commissioner’s Office (ICO)
  • Bring domestic law in line with EU law
  • Ensure data protection within criminal investigations and law enforcement action

As the Financial Times has pointed out, the most recent UK data protection legislation (the Data Protection Act 1998) was written before the time of Facebook, Instagram and Uber. With the UK eager to retain its position as a leader in the G20 internet economy, legislation addressing technological advances is essential.

How will the UK Data Protection Bill differ from the GDPR?

The Bill will enforce the GDPR, but will also add extra stipulations and offences. Some new offences will carry an unlimited maximum fine. These include:

  • “altering records with intent to prevent disclosure following a subject access request” – applicable to all data controllers or processors;
  • “intentionally or recklessly re-identifying individuals from anonymised or pseudonymised data” – also applicable to those knowingly handling or processing such data.
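To see why the re-identification offence matters, consider a minimal sketch (hypothetical data and function names, not from the Bill) of how naive pseudonymisation can be reversed: if identifiers are replaced with plain unsalted hashes, anyone holding a list of candidate identities can recover the mapping by hashing each candidate.

```python
import hashlib

def pseudonymise(email: str) -> str:
    # Replace a direct identifier with its SHA-256 digest (no salt, no secret key).
    return hashlib.sha256(email.encode()).hexdigest()

# A "pseudonymised" record released for analytics.
record = {"subject": pseudonymise("jane.doe@example.com"), "visits": 17}

# An attacker with a plausible candidate list can re-identify the subject
# by hashing each candidate and comparing digests.
candidates = ["john.smith@example.com", "jane.doe@example.com"]
matches = [c for c in candidates if pseudonymise(c) == record["subject"]]
print(matches)  # ['jane.doe@example.com']
```

Salting or keeping a hashing key secret raises the bar considerably, which is one reason the Bill treats deliberate re-identification as an offence rather than assuming pseudonymisation is irreversible.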

Why is the UK Data Protection Bill different from the GDPR?

The statement of intent repeatedly reinforces the strength of the UK’s digital economy and reputation for strong data protection standards. The legislation is part of the government’s wider digital strategy to aid the growth and development of the UK’s digital economy. This will be achieved by ensuring that businesses and the government can use data in “innovative and effective” ways:

  • implementing strong data infrastructure,
  • having a high level of regulatory compliance,
  • developing a data-literate workforce,
  • advancing data skills.

The Bill’s ultimate goal will be the implementation of three interconnected objectives: maintaining trust, future trade and security.

“Our data economy will be integral to the UK’s growth and future prosperity.
Analysis predicts that data will benefit the UK economy by up to £241 billion between 2015 and 2020…
Our vision is to make the UK the safest place to live and do business online.”


The Privacy Post #17

Georgia Wright Privacy

Every week we’ll be rounding up the latest news in privacy and other interesting articles.

UK Home Secretary supports back doors while claiming ‘real people’ don’t need end-to-end encryption

In the campaign for access to encrypted messages, UK Home Secretary Amber Rudd has claimed that “real people often prefer ease of use and a multitude of features to perfect, unbreakable security.” She has further claimed that end-to-end encryption is hampering anti-terrorist efforts. This rhetoric has been heavily criticised.

Apple files patent for screen with privacy-viewing options

A new patent application by Apple, entitled “Displays With Adjustable Angles-of-View”, is aimed at increasing the privacy of tablet and phone users. By using an electrically adjustable lens array to modify backlight illumination, Apple hopes to restrict the display’s angle of view. Read the application here.

EU privacy watchdog: Privacy shield should be temporary

The EU-US data transfer deal will be discussed later this month in Washington. Privacy watchdogs and experts from the European Commission will assess whether Privacy Shield does enough to protect EU citizens in light of controversial US surveillance laws and intelligence-sharing agreements.

“In my view it’s an interim instrument for the short term. Something more robust needs to be conceived,”
– Giovanni Buttarelli, European Data Protection Supervisor

Update on FCC Privacy Rules

The Federal Communications Commission’s new order of 26 June 2017 means that the Code of Federal Regulations will be updated to reflect the reinstatement of the pre-2016 Privacy Order rules.


Big data analytics: the 9 data protection problems concerning the ICO

Georgia Wright Articles

Earlier this year, the Information Commissioner’s Office (ICO) described the combination of big data, artificial intelligence and machine learning as ‘big data analytics’, and marked this trio as distinct from traditional data processing. But which aspects of this distinction concern the ICO?

The report from March 2017, ‘Big data, artificial intelligence, machine learning and data protection’, sets out guidance and recommendations to address the privacy concerns raised by this type of data processing. The ICO identifies nine data protection principles threatened by the use of artificial intelligence to process big data.

1. Big data analytics must be fair

The requirement for data processing to be fair means that its effects on data subjects must be limited and unobtrusive. For example, in 2015 a female doctor was denied entry to a female gym changing room because the automated security system had assumed, from her title ‘Dr’, that she was male. It is this sort of processing that unfairly discriminates against data subjects.

“assessing fairness also involves looking at the effects of the processing on individuals, and their expectations as to how their data will be used”

The complex nature of processing might also impact the transparency requirement of the Data Protection Act (DPA) – a fair processing notice is recommended. Making consumers aware of how and when their data might be collected for processing will help build trust between businesses and consumers.

2. Permission to process

Big data inherently comes paired with consent issues – the General Data Protection Regulation (GDPR) will require “unambiguous” consent, given by a “clear affirmative action”. This is no small feat considering how many subjects might be involved and how the complex processing might be explained to them.

Graduated consent could provide an innovative solution here. Allowing subjects to opt in as data is collected throughout the relationship between service provider and consumer overcomes the binary nature of ‘opt-in or not at all’ forms. At the exact point when an app wants to share information with a third party, the user can be given a ‘just in time’ notification to obtain their consent. This targeted consent is likely to be better informed too.
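As an illustration only (the class and method names are assumptions, not from the ICO report), a just-in-time consent gate might be sketched like this: the app records each purpose the user has affirmatively opted into, and prompts only at the moment a new purpose arises.

```python
class ConsentManager:
    """Tracks which processing purposes a user has affirmatively opted into."""

    def __init__(self):
        self.granted = set()

    def request(self, purpose: str, prompt) -> bool:
        # Ask only at the moment the purpose arises ('just in time'),
        # and remember the answer so the user is not re-prompted.
        if purpose not in self.granted:
            if prompt(purpose):  # must be a clear affirmative action
                self.granted.add(purpose)
        return purpose in self.granted

mgr = ConsentManager()
# Simulate a user who agrees to analytics but refuses third-party sharing.
answers = {"analytics": True, "third_party_sharing": False}
prompt = lambda purpose: answers[purpose]

assert mgr.request("analytics", prompt) is True
assert mgr.request("third_party_sharing", prompt) is False
```

The design point is the granularity: each purpose is consented to separately, at the moment it becomes relevant, rather than bundled into a single up-front form.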

3. Purpose limitation

Data protection principles require that any further processing (which is not directly consented to) must not be incompatible with the original purpose. Big data analytics often leads to the finding of unexpected correlations – this may in turn lead to data being used for new purposes.

4. Holding onto data

The concept of data minimisation underpins data protection legislation. However, when artificial intelligence is applied to data, the scope of analysis is usually much greater – why analyse a sample of a data set when you could easily analyse it all?

“in a study of businesses in the UK, France and Germany, 72% said they had gathered data they did not subsequently use”

5. Accuracy

When data sets are large, incorrect data might be passed over or dismissed. Big data might also fail to represent the general population: holding all of a data set does not rule out certain groups having been excluded or underrepresented when it was collected. Finally, big data analysis can embed hidden biases, and applying its results to individuals in order to profile them might lead to inaccurate assumptions about them.

6. Individual rights: data accessibility

The characteristics that make big data valuable – its volume, variety and complexity – once again pose data protection implications. The DPA requires that individuals be allowed to access their personal data. However, there is one positive outcome noted here: organisations moving to big data might consolidate disparate data stores in the process. This could make it easier to locate data on an individual in the event of a subject access request.
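A toy sketch of that point (the store names and layout are hypothetical): once disparate stores are keyed by a consistent subject identifier, answering a subject access request reduces to a single lookup across all of them.

```python
# Hypothetical, simplified data stores keyed by a common subject ID.
crm = {"u42": {"name": "Jane Doe", "email": "jane@example.com"}}
orders = {"u42": [{"item": "book", "price": 12.99}]}
support = {"u99": [{"ticket": "login issue"}]}

def subject_access_request(subject_id: str) -> dict:
    """Collect every record held on one subject across all known stores."""
    stores = {"crm": crm, "orders": orders, "support": support}
    return {name: store[subject_id]
            for name, store in stores.items()
            if subject_id in store}

result = subject_access_request("u42")
print(result)  # keys 'crm' and 'orders' only – 'support' holds nothing on u42
```

The hard part in practice is the precondition, not the lookup: disparate legacy stores rarely share a common identifier until an organisation deliberately consolidates them.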

7. Security measures and risks

Whilst positioning big data analytics as a useful tool for analysing security risks, the ICO contrasts this by highlighting its drawbacks. Large data sets and the nature of big data processing can throw up specific information security threats.

8. Accountability

The GDPR contains several additional provisions promoting accountability. The context of big data processing – often experimental, without a defined hypothesis or business need – might cause problems when complying with these provisions. For example, organisations of over 250 employees must maintain records of data processing activities. Additionally, erroneous algorithmic decisions based on biased profiling raise accountability issues.

9. Controllers and processors

If artificial intelligence is analysing the data, who is processing it? This question is not as rhetorical as it appears – the issue lies in establishing whether a third-party provider of artificial intelligence is the processor or the controller.

“in a forthcoming article on the transfer of data from the Royal Free London NHS Foundation Trust to Google DeepMind, Julia Powles argues that, despite assertions to the contrary, DeepMind is actually a joint data controller as opposed to a data processor”

If the analytics provider has the power to decide what data to collect and how to apply analytical techniques on behalf of another, it is likely to be the data controller as well as the processor.


The Privacy Post #16

Georgia Wright Privacy

Every week we’ll be rounding up the latest news in privacy and other interesting articles.

Clouds linger over troubled transatlantic data-transfer deal

The transatlantic privacy deal, Privacy Shield, has been called out as invalid by Human Rights Watch. In a letter to the European Commission, the rights group states that US surveillance laws, along with US intelligence-sharing arrangements, infringe the privacy rights of European citizens and fall short of European privacy standards. Read the full letter here.

“There’s no way to get around the fact that US laws and policies allow abusive monitoring and need to be drastically overhauled before they can meet human rights standards,”

– Maria McFarland Sánchez-Moreno, co-director of the US Program at Human Rights Watch.

Trump voting commission wins right to collect state voter data

The Presidential Advisory Commission on Election Integrity has been allowed to collect data from states concerning their registered voters’ full names, political affiliation, criminal records, addresses, dates of birth and whether they have voted in past elections. The Electronic Privacy Information Center had challenged this move in a lawsuit. President Trump established the commission ostensibly to seek out voter fraud.

Privacy Crackdown: Russia Bans VPNs And Apple ‘Sides With Censorship’ In China

Virtual Private Networks (VPNs) will be banned in Russia from 1 November 2017. VPNs have let citizens reach blocked websites by routing their traffic through a secure server; the connection between the user and the VPN’s server is encrypted, enabling private browsing.
Apple has also removed VPN apps from its Chinese app store, a move which has seen Apple accused of siding with censorship.

PNR: EU Court rules that draft EU/Canada air passenger data deal is unacceptable

The Court of Justice of the European Union has confirmed that the EU/Canada deal on collection and sharing of air travellers’ data breaches European law.


UK parliament calls for information on AI

Georgia Wright Articles

The UK government has invited contributions to its investigation into the economic, ethical and social implications of advances in artificial intelligence. The inquiry plans to analyse the risks and benefits of artificial intelligence’s effects on society whilst also searching for pragmatic solutions. The deadline for the submission of written evidence is 6 September 2017.

“The Committee wants to use this inquiry to understand what opportunities exist for society in the development and use of artificial intelligence, as well as what risks there might be.”

– Lord Clement-Jones, Chairman of the Select Committee on Artificial Intelligence.

The inquiry comes after the White House’s report on the future of artificial intelligence in October 2016. The US report reinforces the need for a regulatory environment which is safe yet stimulating, and makes recommendations for the effective retraining of workers. It has pushed MPs to call for a new UK artificial intelligence commission. Read more about the history of UK and US policies for innovation in artificial intelligence here.

The House of Lords Select Committee on Artificial Intelligence will consider:

  • What is the current state of artificial intelligence? How is it likely to develop over the next 5, 10 and 20 years?
  • Is the current level of excitement surrounding artificial intelligence warranted?
  • How can the general public best be prepared for more widespread use of artificial intelligence?
  • Who in society is gaining the most from the development and use of artificial intelligence? Who is gaining the least?
  • Should the public’s understanding of, and engagement with, artificial intelligence be improved?
  • What are the key industry sectors that stand to benefit from the development and use of artificial intelligence?
  • How can the data-based monopolies of some large corporations, and the ‘winner-takes-all’ economics associated with them, be addressed?
  • What are the ethical implications of the development and use of artificial intelligence?
  • In what situations is a relative lack of transparency in artificial intelligence systems (so-called ‘black boxing’) acceptable?
  • What role should the Government take in the development and use of artificial intelligence in the UK?
  • Should artificial intelligence be regulated?
  • What lessons can be learnt from other countries or international organisations in their policy approach to artificial intelligence?

Find out how to submit information here.


The Privacy Post #15

Georgia Wright Privacy

Every week we’ll be rounding up the latest news in privacy and other interesting articles.

French court refers Google privacy case to ECJ

France’s Conseil d’État, the country’s highest administrative court, has asked European judges to consider whether search engines should apply the “right to be forgotten” globally. This means that search providers such as Google or Bing could have to exclude search results from non-European domains, for example Google.com. The case was initiated by Google following a €100,000 fine from France’s privacy watchdog, the Commission Nationale de l’Informatique et des Libertés, imposed for not applying the “right to be forgotten” across its global domains. The European Court of Justice is expected to rule in 2019.

“We look forward to making our case at the European Court of Justice,”
– Peter Fleischer, Google’s global privacy counsel.

Silicon Valley mostly quiet in internet surveillance debate in Congress

Major technology firms have been surprisingly absent from the debate in Washington surrounding the Foreign Intelligence Surveillance Act (FISA). Facebook, Google’s Alphabet and Apple have not expressed opinions on the renewal of the controversial US internet surveillance law, known as Section 702, which allows the US National Security Agency to collect and analyse the digital communications of foreigners overseas – although data on Americans is thought to have been collected without a warrant too. The tech giants are supposedly quiet because of Section 702’s link to the ongoing debate about Privacy Shield in Europe. Read more about Privacy Shield here.

Privacy pros in the know aren’t waiting for Brexit – they’re preparing for GDPR

A study prepared by the International Association of Privacy Professionals (IAPP) has surveyed privacy professionals to reveal their plans ahead of Brexit. In spite of the fact that the UK may have to implement its own data protection laws following Brexit, privacy professionals’ primary focus is to demonstrate compliance with the General Data Protection Regulation (GDPR) from the outset. The survey has additionally found that:

  • 94% of those surveyed are preparing to comply with the GDPR.
  • 66% are implementing a new internal privacy accountability framework.
  • 58% are investing in privacy training for employees.

FBI warns parents of privacy risks associated with internet-connected toys

The FBI has issued a warning on its website, advising parents of the risks of smart toys. These toys employ new technologies in order to learn and adapt their behaviour according to user interactions. Internet-connected toys may pose privacy and safety risks if they have capabilities like microphones, cameras, GPS, data storage and speech recognition.


Cognitiv+ partners with law firm CMS!

Georgia Wright Articles, Features, Uncategorised

Cognitiv+ and CMS have partnered to develop a Financial Agreement risk analysis module under the InnovateUK Open Programme.

“In this project we will revolutionise the way people understand and manage their contractual and regulatory obligations that are buried in financial agreements. It is very exciting to work with CMS, as they are the global leaders in M&A transactions, giving us true insights in the market.”

– Vasilis Tsolis, CEO of Cognitiv+

CMS is deeply involved in the technology sector and has positioned itself at the forefront of the market in advising AI startups.

“We’ve a long history in the technology sector and it’s important that we’re participating and contributing as well as advising where we can. There are lots of great software products in the legal tech market; few use true AI but Cognitiv+ is one of those that do.”

– Charles Kerrigan, project partner lead at CMS


Cognitiv+ exhibits at CogX London 2017!

Georgia Wright Events

Cognitiv+ exhibited at CognitionX’s inaugural event, CogX, in June. Run in association with The Alan Turing Institute, the event aimed to bring together business leaders, government officials and leading academics through a series of panels, talks and breakout sessions.

CogX was the world’s first event exploring the full impact of artificial intelligence. CognitionX will publish a report on the key issues and areas discussed.

Reports from the day will be published on the CogX website. (Source: CogX)

The event encouraged healthy debate on the most pressing questions related to artificial intelligence and its potential, including:

  • The State of AI & Impact on Society
  • Impact of AI on Financial Services & The Economy
  • Impact of AI on Politics & Governing
  • Investing in AI: The Good, The Bad, & The Ugly
  • Impact of AI on Cyber Security
  • Impact of AI on Legal & Professional Services

Cognitiv+ were proud to exhibit at the event and partake in the discourse surrounding these key questions. Cognitiv+ demonstrated how businesses and law firms can use its artificial intelligence platform to draw insight from their legal data.

“The UK’s contribution to AI has already been substantial, and with the right support from the startups, corporates, investors, government, academia and the public, it could be the single biggest driver of productivity and growth for the UK economy.”

– Charlie Muirhead, CognitionX Founder and CEO

For the full itinerary of the day, see here.


Artificial Intelligence and Financial Services: Cognitiv+ recognised as a leading disruptor

Georgia Wright Features

In an investigation into the key areas of technology destined to make a big impact on the financial sector, Cognitiv+ has been recognised as one of the leading disruptors using artificial intelligence. The insight is published by StartUs, the leading startup and innovation network connecting the European startup community.

According to the World FinTech Report 2017, 60% of traditional firms are seeking partnerships with FinTech startups – so it is not surprising that €22 billion was invested in FinTech firms last year. StartUs Insights recognised Cognitiv+ as a rising star in the industry through a large-scale analysis of 14,000 FinTech startups, selecting it as a driver of innovation in the financial industry for its application of artificial intelligence.

“Cognitiv+ has built an Artificial Intelligence Engine capable of identifying topics of interest in unstructured text as well as relationships between topics, companies and more.”
– StartUs Magazine


The other key areas of innovation recognised were mobile, blockchain, big data, regulatory technology, open banking application program interfaces and biometrics.

Read the full article here.
