PAPIs Europe 2018
PAPIs is the 1st series of international conferences dedicated to real-world ML applications, techniques and tools. After 7 previous events on 4 different continents, PAPIs is returning to Europe on 4-6 April 2018. Join us at the Canary Wharf Tower in London!

  • Training Workshops — 4 April, UCL School of Management (Level 38)
  • Industry and Startups — 5 April, Level 39
  • Industry and Research — 6 April, Level 39

More information about the conference at papis.io/europe-2018
Wednesday, April 4 • 09:00 - 17:30
"GDPR: is your ML system ready?" — Workshop brought to you by Ercom


This training workshop will take place before the main conference. It will be given in a small classroom, to maximize interaction and so you can ask even more questions than in a conference setting. This is a vendor-agnostic workshop supported by Ercom; they develop simple and secure business solutions for enterprise file sharing and mobile communications.
-----

Implementing machine learning applications with interpretability, accountability, and trust

The EU General Data Protection Regulation (GDPR) enters into effect in May 2018. Its enforcement will dramatically change the way European companies handle data relating to individuals, and the way global companies handle data relating to EU residents. This is a big step towards more transparency over data; however, it also poses several major challenges for data science practitioners. In particular, it brings a need to understand how machine learning systems profile users, and to avoid bias and discrimination on the basis of special categories of data such as racial origin or political opinion. Particular focus has been given to the inclusion of a “right to an explanation”, which sits alongside older requirements to inform users about the logic of automated processing that significantly affects them. Ensuring that machine learning systems are sufficiently transparent, and that they are not inferring sensitive characteristics without a legal ground to do so, will impact the entire machine learning pipeline, and especially high-dimensional machine learning practice and systems utilising inherently opaque methods such as deep neural networks and model ensembles (e.g. XGBoost).

Start building now the data science skills required to be compliant with GDPR, so you can make sure your business doesn’t fall behind.
You will learn:
  • how the GDPR rules will affect the way data teams do their work;
  • how the regulations will evolve over time, and how to best prepare for that now;
  • state-of-the-art safeguards against machine learning bias and discrimination;
  • how to use new machine learning procedures that ensure predictive power, explainability and trust in automated decisions, such as the creation of interpretable models, and how to apply techniques like LIME to audit and explain black-box models (see the sketch after this list).
Moreover, you will gain straightforward ways to demonstrate the behavior of a predictive model to stakeholders and regulators, supporting fair and transparent decisions.
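
To make the last learning point concrete, here is a minimal sketch, in Python, of auditing a black-box classifier with LIME on tabular data. The dataset, model and parameter choices are illustrative assumptions rather than the workshop's actual lab material, and it assumes the lime and scikit-learn packages are installed.

# Minimal LIME audit of a black-box classifier (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# Train an opaque ensemble model (a stand-in for e.g. XGBoost).
model = GradientBoostingClassifier().fit(X_train, y_train)

# Build a LIME explainer over the training distribution.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: LIME fits a local linear surrogate around the
# instance and reports how much each feature pushed the prediction.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")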


Outline
First part: GDPR overview
  • How to address immediate challenges for data teams: data governance, data protection, privacy by design, data subject requests (right to be forgotten, access, data portability, justification of decisions made by an algorithm)
  • The impact of GDPR on algorithms
  • How to close the gaps to become GDPR compliant from a data science perspective

Second part: Applied interpretable and fairness-aware machine learning, including hands-on labs
  • What is interpretability and why it is important
  • De-biasing, the state-of-the-art and its limitations
  • Interpretable models (a minimal sketch follows this outline)
  • Explaining black-box models
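
As a concrete companion to the outline above, here is a minimal sketch, in Python with scikit-learn, of an inherently interpretable model: a shallow decision tree whose learned rules can be printed and handed to a reviewer. The dataset and depth limit are illustrative assumptions, not the workshop's lab material.

# An inherently interpretable model: a shallow, auditable decision tree.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# Limiting the depth keeps the rule set short enough to read and audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the learned tree as human-readable if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))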


Intended audience 
  • You are a Data Scientist or Data Analyst, aspiring or experienced, who wants to learn the implications of GDPR for your data science process and how to tackle them.
  • You are a Data Engineer or Developer with some Machine Learning exposure who wants to learn how to implement typical GDPR-compliant Data Science and ML workflows.
  • You are a Data Science or Data Engineering Manager, CTO, CIO or technical decision maker who aims to better understand the impact of GDPR on the data-driven enterprise.


Prerequisites
Some experience coding in Python or R and a basic understanding of data science topics and terminology are recommended. Experience with data processing, feature generation, statistical modeling, and the most common machine learning algorithms (linear regressions, trees) is helpful.


Hardware and/or installation requirements
The course will be run in R. Attendees should bring a laptop with a recent R installation. An IDE such as RStudio would be useful. Attendees who wish to use Python may do so, and some resources will be provided for this. The first part of the course has no computing requirements.


Speakers

Michael Veale

Researcher, UCL Department of Science, Technology, Engineering & Public Policy
I’m a technology policy researcher at University College London. I research: on-the-ground issues and design challenges for fairness, transparency and resilience of high-stakes algorithmic systems, particularly in the public sector; machine learning and privacy-enhancing technologies, and their intersection with European data protection law (e.g. the GDPR & ePrivacy); and administrative and institutional challenges in governing fast-moving digital technologies. I previously worked at the...


Wednesday April 4, 2018 09:00 - 17:30
Level38 - UCL Seminar Suite, One Canada Square, London