/r/MachinesLearn

This is a subreddit for machine learning professionals. We share content on practical artificial intelligence: machine learning tutorials, DIY projects, educational videos, new tools, demos, papers, and everything else that can help a machine learning practitioner build modern AI systems. r/MachinesLearn is a machine learning community you'll enjoy belonging to.

We're on Twitter: @r_MachinesLearn

/r/MachinesLearn

11,739 Subscribers

21

[R] Baidu’s 10-Billion Scale ERNIE-ViLG Unified Generative Pretraining Framework Achieves SOTA Performance on Bidirectional Vision-Language Generation Tasks

Baidu researchers propose ERNIE-ViLG, a 10-billion-parameter pretraining framework for bidirectional text-image generation. Pretrained on 145 million (Chinese) image-text pairs, ERNIE-ViLG achieves state-of-the-art performance on both text-to-image and image-to-text generation tasks.

Here is a quick read: Baidu’s 10-Billion Scale ERNIE-ViLG Unified Generative Pretraining Framework Achieves SOTA Performance on Bidirectional Vision-Language Generation Tasks.

The paper ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation is on arXiv.
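
For intuition, here is a toy sketch of the unified-sequence idea the title describes: one autoregressive transformer over a shared text-plus-image token space, so text-to-image and image-to-text are just two orderings of the same sequence. Everything here (sizes, layer counts, the assumption of a VQ-VAE-style discrete image tokenizer) is a hypothetical stand-in, not Baidu's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical toy sizes; the real model is at the 10B-parameter scale.
TEXT_VOCAB, IMAGE_VOCAB, D_MODEL = 30_000, 8_192, 512

class UnifiedGenerator(nn.Module):
    """One causal transformer over a shared token space: text-to-image
    reads [text; image] and image-to-text reads [image; text]."""
    def __init__(self):
        super().__init__()
        # Text tokens and discrete image codes share one embedding table,
        # with id ranges offset so they do not collide.
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(D_MODEL, TEXT_VOCAB + IMAGE_VOCAB)

    def forward(self, tokens):  # tokens: (batch, seq) of joint-vocabulary ids
        seq = tokens.size(1)
        causal = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
        hidden = self.blocks(self.embed(tokens), mask=causal)
        return self.head(hidden)  # next-token logits over the joint vocabulary

# Text-to-image direction: condition on text ids, then autoregressively
# sample image-code ids (one greedy step shown for illustration).
model = UnifiedGenerator()
text = torch.randint(0, TEXT_VOCAB, (1, 16))
next_image_token = model(text)[:, -1].argmax(-1)
```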

1 Comment
2022/01/07
15:18 UTC

8

[ShareMyResearch] Drift with Devil: Security of Multi-Sensor Fusion based Localization in High-Level Autonomous Driving under GPS Spoofing

Content provided by Junjie Shen, the first author of the paper Drift with Devil: Security of Multi-Sensor Fusion based Localization in High-Level Autonomous Driving under GPS Spoofing.

In this work, we perform the first study on the security of MSF-based localization in AV settings. We find that the state-of-the-art MSF-based AD localization algorithm can indeed generally enhance security, but it has a take-over vulnerability that can fundamentally defeat the design principle of MSF and that appears only dynamically and non-deterministically. Leveraging this insight, we design FusionRipper, a novel and general attack that opportunistically captures and exploits take-over vulnerabilities. We perform both trace-based and simulation-based evaluations and find that FusionRipper can achieve success rates of at least 97% and 91.3% in all traces for off-road and wrong-way attacks respectively, with high robustness to practical factors such as spoofing inaccuracies.

0 Comments
2021/01/22
16:21 UTC

51

Book release: Machine Learning Engineering

Hey. I'm thrilled to announce that my new book, Machine Learning Engineering, was just released and is now available on Amazon and Leanpub, as both a paperback edition and an e-book!

I've been working on the book for the last eleven months and I'm happy (and relieved!) that the work is now over. Just like my previous book, The Hundred-Page Machine Learning Book, this new book is distributed on the “read-first, buy-later” principle. That means that you can freely download the book, read it, and share it with your friends and colleagues before buying.

The new book can be bought on Leanpub as a PDF file and on Amazon in paperback and Kindle editions. The hardcover edition will be released later this week.

Here's the book's wiki with the drafts of all chapters. You can read them before buying the book: http://www.mlebook.com/wiki/doku.php

I will be here to answer your questions. Or just read the awesome Foreword by Cassie Kozyrkov!

4 Comments
2020/09/10
02:49 UTC

20

[R] Google ‘BigBird’ Achieves SOTA Performance on Long-Context NLP Tasks

To alleviate the quadratic dependency of transformer attention on sequence length, a team of researchers from Google Research recently proposed a new sparse attention mechanism dubbed BigBird. In their paper Big Bird: Transformers for Longer Sequences, the team demonstrates that despite being a sparse attention mechanism, BigBird preserves all known theoretical properties of quadratic full-attention models. In experiments, BigBird is shown to dramatically improve performance across long-context NLP tasks, producing SOTA results in question answering and summarization.

Here is a quick read: Google ‘BigBird’ Achieves SOTA Performance on Long-Context NLP Tasks

The paper Big Bird: Transformers for Longer Sequences is on arXiv.
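
Conceptually, BigBird replaces the full O(n²) attention matrix with a sparse pattern built from three components: a sliding local window, a handful of global tokens, and a few random connections per query. A toy illustration of such a mask (dense boolean form with made-up sizes; the paper's actual implementation is block-sparse):

```python
import numpy as np

def bigbird_mask(seq_len, window=3, n_global=2, n_random=2, seed=0):
    """Boolean attention mask with BigBird's three components.
    True means the (query, key) pair is allowed to attend."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # 1. Sliding window: each token sees `window` neighbors on each side.
    for i in range(seq_len):
        mask[i, max(0, i - window):min(seq_len, i + window + 1)] = True
    # 2. Global tokens: the first n_global positions attend everywhere,
    #    and every token attends to them.
    mask[:n_global, :] = True
    mask[:, :n_global] = True
    # 3. Random links: each query attends to a few random keys.
    for i in range(seq_len):
        mask[i, rng.choice(seq_len, size=n_random, replace=False)] = True
    return mask

mask = bigbird_mask(64)
print(f"attention density: {mask.mean():.2f}")  # far below 1.0 (full attention)
```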

5 Comments
2020/08/03
18:57 UTC

6

A big update to the "Papers with Code" database of results from papers, now with 2500+ leaderboards and 20,000+ results

Link to the website and the paper on the methodology.

1 Comment
2020/05/20
20:45 UTC

20

Pose Animator: a web animation tool that brings SVG illustrations to life with real-time human perception TensorFlow.js models

1 Comment
2020/05/20
19:35 UTC

19

Google Brain & CMU Semi-Supervised ‘Noisy Student’ Achieves 88.4% Top-1 Accuracy on ImageNet

Very impressive results:

The research team says their proposed method’s 88.4 percent accuracy on ImageNet is 2.0 percent better than the SOTA model that requires 3.5B weakly labelled Instagram images. And that’s not all: “On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2.”

A quick read: Google Brain & CMU Semi-Supervised ‘Noisy Student’ Achieves 88.4% Top-1 Accuracy on ImageNet

The paper: Self-training with Noisy Student improves ImageNet classification
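
The method itself is a simple self-training loop: train a teacher on labeled data, pseudo-label the unlabeled pool, train an equal-or-larger student on both sets with heavy noise, then make the student the new teacher and repeat. A runnable toy version on synthetic data (Gaussian input noise stands in for the paper's RandAugment, dropout, and stochastic depth, and the models are ordinary random forests, not EfficientNets):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: a small labeled set and a large unlabeled pool.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled_X, labeled_y = X[:200], y[:200]
unlabeled_X = X[200:]

teacher = RandomForestClassifier(random_state=0).fit(labeled_X, labeled_y)
for step in range(3):
    pseudo_y = teacher.predict(unlabeled_X)          # teacher pseudo-labels the pool
    train_X = np.vstack([labeled_X, unlabeled_X])
    train_y = np.concatenate([labeled_y, pseudo_y])
    # Noise the student's training inputs (the paper instead uses RandAugment
    # on images plus dropout and stochastic depth inside the network).
    noisy_X = train_X + np.random.default_rng(step).normal(0.0, 0.1, train_X.shape)
    student = RandomForestClassifier(n_estimators=200, random_state=step)
    student.fit(noisy_X, train_y)
    teacher = student                                # the student becomes the teacher

print(f"final accuracy on the pool: {teacher.score(unlabeled_X, y[200:]):.3f}")
```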

5 Comments
2020/02/13
17:03 UTC

8

AAAI 2020 | What’s Next for Deep Learning? Hinton, LeCun, and Bengio Share Their Visions

The trio of researchers has made deep neural networks a critical component of computing, and in individual talks and a joint panel they shared their views on the current challenges facing deep learning and where it should be heading.

Read more

0 Comments
2020/02/11
03:35 UTC

1

State of the art in image inpainting!

0 Comments
2020/02/10
21:55 UTC

7

ICYMI from Tencent researchers: Real-time, high-quality video object segmentation!

0 Comments
2020/02/08
22:58 UTC

3

Latest from Intel researchers on object detection!

0 Comments
2020/02/07
03:51 UTC

2

State of the art in guided image-to-image translation

0 Comments
2020/02/07
03:24 UTC

26

Machine Unlearning: Fighting for the Right to Be Forgotten

In a new paper, researchers from the University of Toronto, Vector Institute, and University of Wisconsin-Madison propose SISA (Sharded, Isolated, Sliced, and Aggregated) training, a new framework that helps models “unlearn” information by reducing the number of updates that need to be recomputed when data points are removed.

Read more.
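
The sharding idea is easy to sketch: train one constituent model per data shard, aggregate their predictions, and honor a deletion request by retraining only the shard that contained the point. A toy version (full SISA additionally slices each shard and checkpoints training, which this sketch omits):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1200, random_state=0)
n_shards = 4
shards = np.array_split(np.arange(len(X)), n_shards)   # disjoint data shards
models = [LogisticRegression(max_iter=1000).fit(X[i], y[i]) for i in shards]

def predict(x):
    """Aggregate the constituent models by majority vote."""
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    return max(set(votes), key=votes.count)

def unlearn(point_id):
    """Honor a deletion request by retraining only the affected shard."""
    for s, idx in enumerate(shards):
        if point_id in idx:
            shards[s] = idx[idx != point_id]
            models[s] = LogisticRegression(max_iter=1000).fit(
                X[shards[s]], y[shards[s]])
            return  # the other n_shards - 1 models are untouched

print(predict(X[0]))
unlearn(7)  # far cheaper than retraining on the full dataset
```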

1 Comment
2020/02/05
20:33 UTC

3

Future of fashion design: Generate a new garment that seamlessly integrates the desired design attribute into the reference image

0 Comments
2020/02/04
21:45 UTC

15

Change My Mind: Deep learning isn’t ready to be used for conversational AI systems

Google’s Meena was recently presented in a preprint claiming it could create its own jokes, but the risk of racist output and the system’s logical inconsistencies mean it isn’t ready to be deployed in a corporate environment. Change my mind.

13 Comments
2020/02/04
21:36 UTC

1

Just in: A new comprehensive object detection dataset for detecting parking stickers on cars!

0 Comments
2020/02/04
20:57 UTC

12

Tutorial: Image Compression Using Autoencoders in Keras

In this tutorial, author and teacher Ahmed Fawzy Gad gives a thorough introduction to autoencoders and shows how to use them for image compression in Keras.

Article link: https://blog.paperspace.com/autoencoder-image-compression-keras/
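
For a taste of what the tutorial covers, here is a minimal dense autoencoder on MNIST that compresses 784-pixel images through a 32-dimensional bottleneck. This is a bare-bones sketch, not the article's exact code:

```python
from tensorflow import keras
from tensorflow.keras import layers

# MNIST digits flattened to 784 floats in [0, 1].
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = keras.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)       # encoder: 784 -> 32
outputs = layers.Dense(784, activation="sigmoid")(code)  # decoder: 32 -> 784
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))

# The encoder alone is the "compressor": image -> 32-float code.
encoder = keras.Model(inputs, code)
print(encoder.predict(x_test[:1]).shape)  # (1, 32)
```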

0 Comments
2020/02/02
17:14 UTC

22

ICYMI from Nvidia researchers: Produce a 3D object from a 2D image (in less than 100 milliseconds!)

0 Comments
2020/02/01
04:38 UTC

3

How do you analyze the distribution of scores produced from a binary classification model?

How do you analyze the distribution of scores produced from a binary classification model to make sure it makes sense?

I am using a decision tree to predict how likely an individual is to vote. One idea is to analyze the splits of the tree to see why an individual was given a particular score. For example, people who got a score below 25% had these characteristics, people who got a score between 25% and 50% had these characteristics, and so on. Is there a better way to do it?
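
A minimal sketch of that bucketing idea (the data and column names here are hypothetical stand-ins for your own): bin the scores and profile each bin. Checking calibration, i.e. whether roughly 30% of the people scored around 0.3 actually voted, is a standard complement; scikit-learn's calibration_curve does this.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in data: `score` is the model's predicted P(vote),
# the other columns are whatever features describe your individuals.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "score": rng.uniform(0, 1, 1000),
    "age": rng.integers(18, 90, 1000),
    "voted_before": rng.integers(0, 2, 1000),
})

# Bucket the scores and profile each bucket, as described above.
df["bucket"] = pd.cut(df["score"], bins=[0, 0.25, 0.5, 0.75, 1.0])
print(df.groupby("bucket", observed=True)[["age", "voted_before"]].mean())
```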

1 Comment
2020/01/30
21:49 UTC
