Artificial intelligence productivity rose steadily in the early part of the decade, from roughly 1.5 percent in 2010 to a high of 2.2 percent in 2014. Since then the figure has been declining, with its last local peak back in 2017.
In the past, companies like IBM and Microsoft have argued that AI should be used to automate tasks that previously required humans, but that is not necessarily how it plays out.
For example, IBM has argued that applying the machine learning that powers its Watson system is far easier than it would be for someone to build that expertise from scratch.
Meanwhile, Microsoft argues that voice assistants such as its own Cortana and Amazon’s Alexa are better at what they do than the people they’re supposed to be helping.
These are valid points.
However, the reality is that AI is still in its infancy, and it will take a while for AI to really start to take over the world.
So how are these projects doing?
In 2017, a new study from the Center for Business and Economic Research looked at the productivity of several AI projects, measuring how their output stacks up against the rest of the field.
The research team used a number of metrics to gauge how well these projects were doing.
One metric, called a “per-project” metric, assigns each project a score from 0 to 100, usually reported as a normalized fraction between 0 and 1; a project with a score of zero has effectively no measurable output. The team treated projects producing 0.5 or more per project as strong performers and those producing 0.3 or less as weak ones. In other words, even a project like DeepMind’s AlphaGo, which defeated top human players at Go years before most experts expected, carries a “performance score” of 0 on this metric, meaning its measured productivity is effectively nil.
It also took into account how much time it took to develop and test the AI.
The team also rated each project on a scale of one to ten, indicating how many people on a given project could solve the problem at hand with the help of an AI. A normalized score of 0.2 or more on this scale is a sign of a good AI project, while 0.1 or less indicates a project in trouble.
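The article does not give the study’s actual formulas, so the following is a rough illustration only: a hypothetical sketch of how a normalized per-project productivity score and the quoted 0.5/0.3 cutoffs might work in practice. The function names, the output-per-effort definition, and everything beyond those two cutoffs are assumptions, not the researchers’ method.

```python
# Hypothetical illustration only: a toy per-project productivity score.
# The 0.5 ("strong") and 0.3 ("weak") thresholds mirror the cutoffs quoted
# in the article; the score definition itself is an assumption.

def productivity_score(output_units: float, effort_units: float) -> float:
    """Normalized output per unit of effort, clamped to [0.0, 1.0]."""
    if effort_units <= 0:
        return 0.0  # no effort recorded: treat as no measurable productivity
    return min(output_units / effort_units, 1.0)

def classify(score: float) -> str:
    """Bucket a normalized score using the article's quoted cutoffs."""
    if score >= 0.5:
        return "strong performer"
    if score <= 0.3:
        return "weak performer"
    return "middling"

print(classify(productivity_score(6.0, 10.0)))  # 0.6 -> strong performer
print(classify(productivity_score(2.0, 10.0)))  # 0.2 -> weak performer
```

Under this toy definition, a technically impressive project can still score near zero if its measurable output is small relative to the effort invested, which is consistent with how the study reportedly scored AlphaGo.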
The results are telling: the number of projects producing 0.8 or more per project is much higher than what the world’s largest AI research organization, the AI Alliance, has reported.
For the first time, the world leader on a per-project basis is Google’s AI lab. At the bottom of the list, DeepMind has the lowest per-project score.
As the graph below shows, the research team found that some AI projects have achieved very high scores on the per-project scale.
At the same time, other projects scored low. For a field of this size, that means there are many projects where the AI team isn’t doing a great job. One reason is that the team has to keep up with what the AI system is learning, which can be a slow process; as the team falls further behind, the results become less accurate.
The data also shows that projects reaching 10 percent or more productivity capture, on average, about 1.0 percent of total AI project output, while projects below 10 percent capture around 0.7 percent. Projects at 0.9 percent productivity average a score near the bottom of the scale, at about 0.6 percent of their output, and projects that do worse than that fall toward 0.4 percent of output, or close to zero output per person.
It’s important to note that these scores are per person, so a 0 percent productivity score doesn’t mean that the entire team is contributing nothing to the AI software they’re working on, only that some of its members are.
DeepMind’s AlphaGo project has drawn the most attention of any of these projects, and the team that created it was among the fastest in the world to get the technology working.
The other projects with high per-project scores are Microsoft’s Cortana and Amazon’s Alexa, the leaders on this metric.
However, Amazon’s project, despite a very slow start, is also a high performer.
Amazon’s Alexa has a higher per-user score than Microsoft’s Cortana, though the gap between the two is small.
Amazon also had a much slower start than Google.
Google also started with a low per-user score, and it did well for a long time.
The researchers then asked what would happen if Amazon went from