
DataCamp Digest July 2021: AI regulations? Sandboxes seem to be the solution

Read our favorite articles from the last month.

DataCamp Digest is our newsletter aimed at providing the most up-to-date insights and news on all things data science. In this edition of the newsletter, we discuss the dark side of algorithmic hiring, the rise of AI sandboxes for compliant innovation, our future with AI, and more.

Will AI give us the lives of leisure we long for? | Podcast: The Ezra Klein Show

Sam Altman is the CEO of OpenAI, the research lab that developed and rolled out GPT-3. Sam is a big believer in Moore’s Law, which posits that computing power doubles roughly every two years even as costs fall. Altman predicts that the advent of highly intelligent agents, foreshadowed by GPT-3, will generate a “Moore’s Law for everything”, unlocking higher-quality, more affordable housing, health care, education, you name it. Listen to the show to learn more about Altman’s views on the future of AI.

To regulate AI, try playing in a sandbox | Emerging Tech Brew

There is rising interest in using “regulatory sandboxes” to govern AI without hamstringing innovation. These allow organizations to develop and test new technologies in a low-stakes, monitored environment before rolling them out to the general public. How will this change the AI landscape? Read more to find out.

What Google’s AI-designed chip tells us about the nature of intelligence | TNW Neural

Google’s AI research team built a reinforcement learning model that helps the company design more efficient chips. As the number of components in a chip grows, finding the most efficient location for each component becomes a major challenge. Learn how Google leveraged human and artificial intelligence to solve this problem.

Hired by an algorithm | MIT Technology Review

In this podcast, the CEO of ZipRecruiter and one of the architects behind LinkedIn’s algorithmic job-matching system discuss the advantages and disadvantages of algorithmic recruitment practices. Even though software can accelerate the recruitment process, algorithms can produce results biased by race, gender, and, in at least one case, whether you played lacrosse in high school.

AI can now emulate text style in images using just a single word | Facebook AI

The team at Facebook AI developed a self-supervised learning algorithm that can emulate the style of text in a photo using just one training example. Soon enough, editing highly stylized or handwritten text in images will be as simple as copy and paste.

Global AI Vibrancy Tool | Stanford University Human Centered AI

The Global AI Vibrancy Tool is an interactive visualization that allows cross-country comparison for up to 26 countries across 22 indicators. The tool provides a transparent evaluation of the relative position of countries based on wide-ranging categories such as research, economy, and inclusion.

How Airbnb standardized metric computation at scale, Part I | The Airbnb Tech Blog

As data warehouses keep expanding, the challenge of creating, managing, computing, and distributing data across different departments and teams keeps growing. Learn why Airbnb built Minerva and how the software allowed them to turn data into actionable strategies. Make sure you read Part II of the story.

9 Ethical AI Principles for Organizations to Follow | World Economic Forum

An increasing number of organizations are beginning to craft ethical charters and principles that will guide their AI development. In this article, the responsible AI team at PwC lays out 9 ethical principles for developing AI that can be leveraged by any organization today.

Exploring Data at Netflix | The Netflix Tech Blog

Providing easy access to relevant data to a variety of data roles across an organization is no easy feat. In this article, the team at Netflix outlines how they scaled access to data with their internal Data Explorer tool, and how they’re open-sourcing it for the world to use.

IVY: The templated deep learning framework | Deep Learning Weekly

Ivy is a templated deep learning framework which maximizes the portability of deep learning codebases by wrapping the functional APIs of existing frameworks. It currently supports JAX, TensorFlow, PyTorch, MXNet, and NumPy.

Orbit, an open source package for time series inference and forecasting | Uber Engineering

Orbit is a newly developed package by the team at Uber for Bayesian time series modeling. The goal behind Orbit is to create a tool that is easy to use, flexible, interpretable, and high-performing, allowing for easy model specification and analysis without limiting itself to a small subset of models.

Gradio: Create quick UIs for prototyping your ML model | The Gradio Team

Creating UIs for testing machine learning models can be time-consuming. Gradio is an open source tool that lets you create simple interfaces to demo your models.

Webinar: Developing an AI literate nation

In this fireside chat, Laurence Liew, Director of AI Innovation, and Koo Sengmeng, Senior Deputy Director of AI Innovation at AI Singapore, take a deep dive into AI Singapore’s mission and how it is accelerating AI adoption across the nation.

White Paper: Data literacy for responsible AI

In this white paper, co-written with the trusted AI team at DataRobot, we outline the importance of developing responsible AI, practical solutions data teams and organizations can adopt, and the crucial role data literacy plays when scaling responsible AI.

Podcast: #64 Creating trust in data with data observability

In this episode of DataFramed, Adel speaks with Barr Moses, CEO and co-founder of Monte Carlo, on the importance of data quality and how data teams can leverage data observability to create trust in data. Make sure to listen and subscribe on your favorite podcasting app.