State of Artificial Intelligence (July 2019)

These are my notes from public reports on Artificial Intelligence:

  1. There has been a lot of progress in reducing the compute time required to train AI models. Some of the most comprehensive resources for end-to-end machine learning benchmarks are the Stanford DAWN Benchmark, the Bayesian Deep Learning Benchmarks, and the GLUE Benchmark.

  2. Two very interesting posts by MIRI-

  3. MIRI and OpenAI have been talking about security and alignment problems for a very long time, but I think there are two key contenders among the top problems-

    • OpenAI's GPT-2 - They recently released an interim update (May 2019): “We’re implementing two mechanisms to responsibly publish GPT-2 and hopefully future releases: staged release and partnership-based sharing. We’re now releasing a larger 345M version of GPT-2 as a next step in staged release, and are sharing the 762M and 1.5B versions with partners in the AI and security communities who are working to improve societal preparedness for large language models.” A minimal sketch for loading the released 345M model follows this list.
    • DeepFakes - CNN recently published a post explaining the danger of DeepFakes. Recent concerns on the topic have centered on FakeNudes and on the impact deepfakes could have on the upcoming elections, especially in the form of FakeNews and FakeVideos.
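
    The notes above only quote OpenAI's release plan; as a concrete illustration, here is a minimal sketch of sampling from the publicly released ~345M GPT-2 checkpoint, assuming the Hugging Face transformers library (which exposes it as "gpt2-medium"). The prompt and decoding settings are my own illustrative choices, not from the report.

        # Minimal sketch: sampling from the publicly released ~345M GPT-2.
        # Assumes `pip install torch transformers`; "gpt2-medium" is the
        # Hugging Face identifier for the 345M checkpoint.
        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
        model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
        model.eval()

        prompt = "The state of artificial intelligence in 2019 is"
        input_ids = tokenizer.encode(prompt, return_tensors="pt")

        with torch.no_grad():
            output = model.generate(
                input_ids,
                max_length=60,    # total length, prompt included
                do_sample=True,   # sample rather than decode greedily
                top_k=40,         # keep only the 40 most likely tokens per step
                temperature=0.8,
            )

        print(tokenizer.decode(output[0], skip_special_tokens=True))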
       
  4. Some interesting game projects to look at from the past year-

  5. The key trends in Deep Learning this year

    • Inverse Reinforcement Learning
    • Curiosity Driven Exploration (a small intrinsic-reward sketch follows this list)
    • Meta-Learning
    • Quality Diversity Algorithms (e.g. POET)
    • NLP breakthroughs: Top mentions include Google's BERT, Transformer, CLEVR, Allen Institute's ELMo, OpenAI's Transformer, Fast.ai's ULMFiT, and Microsoft's MT-DNN (honorary mention: Cyc)
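
    The notes only name these trends; as a small illustration of curiosity-driven exploration, here is a sketch of a prediction-error intrinsic reward in the spirit of the Intrinsic Curiosity Module, assuming PyTorch. The network sizes, dimensions, and function names are my own illustrative assumptions, not taken from any cited report.

        # Minimal sketch of a curiosity bonus: the agent is rewarded for visiting
        # transitions its learned forward model predicts poorly (ICM-style).
        # Assumes PyTorch; all shapes and names are illustrative only.
        import torch
        import torch.nn as nn

        class ForwardModel(nn.Module):
            """Predicts the next state embedding from (state embedding, action)."""
            def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(state_dim + action_dim, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, state_dim),
                )

            def forward(self, state, action):
                return self.net(torch.cat([state, action], dim=-1))

        def intrinsic_reward(model, state, action, next_state, scale: float = 0.1):
            """Curiosity bonus = scaled squared prediction error of the forward model."""
            with torch.no_grad():
                pred_next = model(state, action)
                error = 0.5 * (pred_next - next_state).pow(2).sum(dim=-1)
            return scale * error

        # Usage: add the bonus to the environment reward before the policy update,
        # and train the forward model on the same transitions with an MSE loss.
        model = ForwardModel(state_dim=8, action_dim=2)
        s  = torch.randn(32, 8)   # batch of state embeddings
        a  = torch.randn(32, 2)   # batch of (continuous) actions
        s2 = torch.randn(32, 8)   # batch of next-state embeddings
        bonus = intrinsic_reward(model, s, a, s2)
        print(bonus.shape)        # torch.Size([32])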
       
  6. Top Tools & Libraries-

    • ONNX (a minimal export sketch follows this list)
    • Facebook Horizon
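
    The notes just name the tools; as an illustration of how ONNX is used in practice, here is a minimal sketch of exporting a small PyTorch model with torch.onnx.export. The model, tensor names, and file name are illustrative assumptions, not from the report.

        # Minimal sketch: exporting a small PyTorch model to the ONNX format.
        # Assumes PyTorch and its built-in torch.onnx exporter; the model,
        # input shape, and file name below are illustrative only.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(16, 32),
            nn.ReLU(),
            nn.Linear(32, 4),
        )
        model.eval()

        dummy_input = torch.randn(1, 16)  # example input that fixes the graph's shapes

        torch.onnx.export(
            model,
            dummy_input,
            "tiny_classifier.onnx",       # output file
            input_names=["features"],
            output_names=["logits"],
            dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},
        )
        # The resulting file can then be loaded by any ONNX-compatible runtime.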
       
  7. Ongoing Competitions:
