One of the hottest topics today is the general public's growing concern over privacy and security. Nothing has caught the public's attention more than the Snowden case, which called into question the authority of governments across the world to use technology for mass surveillance and to access personal data. In fact, some critics have pointed out that the public trusts private companies like Facebook and Google - which openly state that they profit from personal data by targeting advertising at users - more than they trust their own governments.
This could be for a range of reasons: some may come down to transparency, some may be political, and others may stem from a lack of security, as in the recent NHS hacking scandal. One notable example is Apple's refusal to give the FBI access to unlocked iPhones, on the grounds that doing so would erode customers' trust in the company's ability to keep their data safe and could have opened up new security vulnerabilities.
Nonetheless, we are in a very different time to any other in history: Big Data and Industry 4.0 are a reality, made possible by the high processing capabilities of computers in both centralised and distributed systems, and by techniques such as virtual servers, cloud databases, data mining and NoSQL technologies that make this massive increase in data manageable. There are now cases where governments and corporations work seamlessly together to provide solutions to many of the problems we face in the world. However, this raises some important questions, one of which is whether this gathering of data is a breach of our security and personal data and, with the increase in hacking scandals, whether it puts the public at greater risk.
One of the biggest revolutions, known as the fourth industrial revolution by analogy with the original industrial revolution, will have dramatic effects on the future and will change the way everyone lives their day-to-day lives, thanks to the increase in computer intelligence and the growing complexity of autonomous machines, multi-agent systems and knowledge-based systems. Historically, computers have not been capable of processing large datasets of images and videos, but as storage and memory increase, and as neural networks become more popular than ever, all of this is changing! Nowadays almost anyone can get access to this technology, albeit on a smaller scale, using Python libraries such as Theano or TensorFlow. In fact, deep learning and neural networks have already been used to improve healthcare in many disciplines, and they have the potential to revolutionise the field of medicine and every other discipline across multiple industries.
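To give a sense of how accessible this has become, here is a minimal sketch of the kind of computation that libraries like TensorFlow and Theano automate: a tiny two-layer neural network, written from scratch with NumPy, learning the XOR function. Every detail here (layer sizes, learning rate, random seed) is an illustrative choice made for this sketch, not something taken from the article or from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem a single-layer network cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2 -> 8 -> 1 network.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # network output in (0, 1)
    return h, out

def mse(out):
    return float(np.mean((out - y) ** 2))

_, out0 = forward(X)
initial_loss = mse(out0)

lr = 2.0
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: gradients of the mean-squared error
    # with respect to each weight matrix and bias.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

_, out_final = forward(X)
final_loss = mse(out_final)
print(f"loss before: {initial_loss:.3f}, after: {final_loss:.3f}")
```

Frameworks such as TensorFlow replace the hand-written gradient lines with automatic differentiation and run the same arithmetic on GPUs, which is what makes training on large image and video datasets feasible.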
So why do we find ourselves in this dilemma, this double-edged sword, so to speak, of privacy versus progress? On the one hand, we all want to improve our healthcare and take advantage of big data, but on the other we do not want to give up our privacy. While it may be true that companies do not explicitly give up personal information, and many take steps to anonymise the data, some argue that the identity of individuals could be revealed through reverse engineering. This is a real problem: the last thing the public wants is insurance companies denying them adequate healthcare on the basis of private, confidential records they shouldn't have had access to in the first place!
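The re-identification risk is easy to illustrate. In this toy sketch, medical records stripped of names are linked back to individuals by joining on quasi-identifiers (postcode, date of birth, sex) against a hypothetical public register such as an electoral roll. All names, postcodes and diagnoses below are invented for the example.

```python
# "Anonymised" records: names removed, but quasi-identifiers kept.
anonymised_records = [
    {"postcode": "NW3 2QG", "dob": "1984-03-07", "sex": "F", "diagnosis": "AKI"},
    {"postcode": "E1 6AN",  "dob": "1990-11-21", "sex": "M", "diagnosis": "asthma"},
]

# A hypothetical public register that does contain names.
public_register = [
    {"name": "A. Example", "postcode": "NW3 2QG", "dob": "1984-03-07", "sex": "F"},
    {"name": "B. Example", "postcode": "SW1A 1AA", "dob": "1971-05-02", "sex": "M"},
]

def reidentify(records, register):
    """Link anonymised records to names via shared quasi-identifiers."""
    matches = []
    for rec in records:
        for person in register:
            if all(rec[k] == person[k] for k in ("postcode", "dob", "sex")):
                matches.append((person["name"], rec["diagnosis"]))
    return matches

print(reidentify(anonymised_records, public_register))
```

Here one of the two "anonymous" records is re-identified despite containing no name at all, which is why simply dropping names is not considered sufficient anonymisation.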
The ICO, the UK's Information Commissioner's Office, has ruled that the NHS failed to protect the privacy of patients when it shared data with Google. It criticised the Royal Free NHS Foundation Trust for the way it handled data during medical trials that were, on the surface, noble and well intentioned, such as finding a better way to detect kidney injuries; the concern, however, is that the hospital did not tell patients how their data was being used by Google. It has been revealed that the NHS provided records on approximately 1.6 million patients to Google's DeepMind division last year, which will naturally raise public concern about why patient data is being transferred to private companies.
In Google's defence, the information was used to refine and develop a better diagnosis and detection system capable of identifying patients at particular risk of developing Acute Kidney Injury (AKI). As a direct result of these trials, an app called Streams has been designed to help doctors find patients at risk of AKI. It does not appear that the data was used for any malicious purpose, but it is vital that the data is never put to the wrong use or leaked to people who do not have the public's best interests at heart.
Video – Case Study: TensorFlow in Medicine – Retinal Imaging (TensorFlow Dev Summit 2017)
Above is just one of the talks from the TensorFlow Dev Summit 2017 on how neural networks and machine learning libraries are being used to revolutionise the field of medicine.
Elizabeth Denham, the Information Commissioner, has cautioned authorities that the creative use of public data has to be carefully managed, and she has made the following statement: "The price of innovation does not need to be the erosion of fundamental privacy rights".
So what does this mean for the NHS, you may ask? As it stands, the NHS has not been fined as a result of these investigations, but it has been required to sign an undertaking committing it to change the way it handles patients' data. Google has added that it is "pleased" that the Information Commissioner's Office will let it continue using the Streams app "to help patients".
The Royal Free has also released a statement saying that it accepts the ICO's findings and will continue to make progress in addressing the areas of concern, adding that "the power of technology to improve care for patients has always been the driving force for the Streams app."
In a statement, Dominic King of DeepMind said that Google had responded with optimism, welcoming the "thoughtful resolution" of the case and adding that the company would reflect further on its involvement with hospitals across the UK. Google has acknowledged that it underestimated the complexity of the NHS, was not entirely aware of all the legalities surrounding patient data within the UK, and did not anticipate the growing public concern about a well-known multinational company working in the interests of public health. It has admitted that the AI division concentrated on building tools for clinicians rather than thinking about how the project could and should be shaped by the needs of patients and the general public. Google has therefore acknowledged where it made mistakes, and claims to be more aware of the social, cultural and political issues surrounding big companies having access to patients' data.
The link between artificial intelligence and deep learning is changing the way we see data; currently, data is used to optimise search engines, among many other services. Sharing personal information has improved our lives in some respects, but in other cases it comes at a cost. Cases like this raise serious questions about how we will strike the balance between avoiding Snowden's mass-surveillance dystopia and acquiring data solely to improve people's lives.
The big questions about the importance of privacy have only just begun to be asked: on the one hand, access to this data could save lives, yet misusing it could have unintended consequences. Only time will tell what kind of future we will be living in over the decades to come.