/r/mlpapers


A subreddit for weekly machine learning paper discussions. Started by the people from /r/MachineLearning

If you want to get started with Machine Learning, try /r/LearnMachineLearning

7,529 Subscribers

1

Implementing a machine learning paper's GitHub repo.

Can I kindly steal someone's time here to help me run a paper's ML code repo on Google Colab?
Even just guiding me on what to do, or sending me some helpful links, would be great.

Much appreciated.

0 Comments
2023/12/11
08:55 UTC

9

Google announces 2.2M new materials discovered using GNN

Materials discovery is critical but tough. New materials enable big innovations like batteries or LEDs. But there are near-infinitely many combinations to try, and testing them experimentally is slow and expensive.

So scientists and engineers want to simulate and screen materials on computers first. This can check far more candidates before real-world experiments. However, models have historically struggled to accurately predict whether materials are stable.

Researchers at DeepMind made a system called GNoME that uses graph neural networks and active learning to push past these limits.

GNoME models materials' crystal structures as graphs and predicts formation energies. It actively generates and filters candidates, evaluating the most promising with simulations. This expands its knowledge and improves predictions over multiple cycles.
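The generate-filter-evaluate-retrain cycle described above can be sketched as a toy loop. Everything below is a hypothetical stand-in, not GNoME itself: a one-parameter "surrogate" plays the role of the graph neural network, a random generator stands in for symmetry-aware structure generation, and a cheap quadratic function stands in for expensive DFT simulations.

```python
import random

random.seed(0)

def generate_candidates(n):
    # Stand-in for structure generation: each "material" is one feature value.
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def simulate(x):
    # Stand-in for an expensive simulation returning a formation energy.
    return x * x - 0.5

class Surrogate:
    # Stand-in for the GNN: a one-parameter energy model, w * x^2 - 0.5.
    def __init__(self):
        self.w = 0.0

    def predict(self, x):
        return self.w * x * x - 0.5

    def fit(self, xs, ys):
        # Closed-form least-squares fit of w on (x^2, y + 0.5) pairs.
        num = sum((x * x) * (y + 0.5) for x, y in zip(xs, ys))
        den = sum((x * x) ** 2 for x in xs) or 1.0
        self.w = num / den

model = Surrogate()
labeled_x, labeled_y = [], []

for cycle in range(3):
    pool = generate_candidates(100)
    # Filter: keep the candidates the surrogate predicts to be most stable.
    promising = sorted(pool, key=model.predict)[:10]
    # Evaluate the promising ones with the expensive "simulator"...
    labeled_x += promising
    labeled_y += [simulate(x) for x in promising]
    # ...and fold the new labels back into the surrogate.
    model.fit(labeled_x, labeled_y)
```

Each cycle the surrogate sees more simulated labels, so its predictions (here, the single weight w converging toward the true value) improve, which is the active-learning effect the post describes.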

The authors introduced new ways to generate derivative structures that respect symmetries, further diversifying discoveries.

The results:

  1. GNoME found 2.2 million new stable materials - equivalent to 800 years of normal discovery.
  2. Of those, 380k were the most stable and candidates for validation.
  3. 736 were validated in external labs. These include a totally new diamond-like optical material and another that may be a superconductor.

Overall this demonstrates how scaling up deep learning can massively speed up materials innovation. As data and models improve together, it'll accelerate solutions to big problems needing new engineered materials.

TLDR: DeepMind made an AI system that uses graph neural networks to discover possible new materials. It found 2.2 million candidates, 380k of which are the most stable. Over 700 have already been synthesized.

Full summary available here. Paper is here.

1 Comment
2023/11/30
02:30 UTC

4

[P] Will Tsetlin machines reach state-of-the-art accuracy on CIFAR-10/CIFAR-100 anytime soon?

0 Comments
2023/09/13
13:23 UTC

3

Voicebox From Meta AI Gonna Change Voice Generation & Editing Forever - Can Eliminate ElevenLabs

1 Comment
2023/06/16
23:02 UTC

5

AI Learns How To Play Physically Simulated Tennis At Grandmaster Level By Watching Tennis Matches - By Researchers from Stanford University, NVIDIA, University of Toronto, Vector Institute, Simon Fraser University

0 Comments
2023/05/03
22:19 UTC

5

Hello. I am looking for a way to improve audio quality of older videos - perhaps audio super resolution - or any other ways

Hello everyone. I am a software engineering assistant professor at a private university, and I have lots of older lecture videos on my channel.

I am using NVIDIA broadcast to remove noise and it works very well.

However, I want to improve audio quality as well.

After doing a lot of research, I found that audio super-resolution is the way to go.

The only GitHub repo I have found so far is not working.

Any help is appreciated.

How can I improve speech quality?

Here is my example lecture video (noise already removed and re-uploaded, but the sound is still not good):

C# Programming For Beginners - Lecture 2: Coding our First Application in .NET Core Console

https://youtu.be/XLsrsCCdSnU

0 Comments
2023/02/15
21:26 UTC

2

Help needed in interpretation of a paper's data preparation.

I'm trying to build a neural network for unsupervised anomaly detection in log files and found an interesting paper, but I'm not sure how to prepare the data. Maybe that's because I am not a native English speaker.

Unsupervised log message anomaly detection

https://www.sciencedirect.com/science/article/pii/S2405959520300643

I will quote it in chunks and try to interpret each step.

Section 2.3, Proposed model (bottom of page 3), says the following:

  1. Tokenize and change letters to lower case - Meaning: split into words and convert to lower case.
  2. Sentences are padded to 40 words - If a row has fewer than 40 words, we add a special character (like '0') as a placeholder for the remaining positions.
  3. Sentences below 5 words are eliminated - Trivial.
  4. Word frequency is then calculated and the data is shuffled - ????
  5. Data is normalized between 0 and 1 - I don't really understand what the data is here.

I cannot really follow step 4. It would be great if you could help me!
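One plausible reading of steps 4-5 (an interpretation, not the paper's actual code) is that each token is replaced by its corpus frequency, the rows are shuffled, and the resulting numeric matrix is min-max scaled to [0, 1]. A sketch of all five steps under that assumption:

```python
from collections import Counter
import random

PAD, MAX_LEN, MIN_LEN = "0", 40, 5

def preprocess(lines, seed=0):
    # 1. Tokenize and lower-case.
    sents = [line.lower().split() for line in lines]
    # 3. Drop sentences shorter than 5 words.
    sents = [s for s in sents if len(s) >= MIN_LEN]
    # 2. Pad (or truncate) every sentence to exactly 40 tokens.
    sents = [(s + [PAD] * MAX_LEN)[:MAX_LEN] for s in sents]
    # 4. Replace each token by its corpus frequency, then shuffle rows.
    freq = Counter(tok for s in sents for tok in s)
    vecs = [[freq[tok] for tok in s] for s in sents]
    random.Random(seed).shuffle(vecs)
    # 5. Min-max normalize all counts into [0, 1].
    lo = min(min(v) for v in vecs)
    hi = max(max(v) for v in vecs)
    span = (hi - lo) or 1
    return [[(c - lo) / span for c in v] for v in vecs]

rows = preprocess([
    "ERROR disk quota exceeded on node a01",
    "INFO job 42 finished on node a01",
    "short line",  # fewer than 5 words: dropped by step 3
])
```

Under this reading, "the data" in step 5 is the matrix of per-token frequency counts, which the autoencoder then consumes as normalized numeric input.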

0 Comments
2023/01/12
10:01 UTC

5

[R] Do we really need 300 floats to represent the meaning of a word? Representing words with words - a logical approach to word embedding using a self-supervised Tsetlin Machine Autoencoder.

0 Comments
2023/01/03
18:57 UTC

3

Now Find and Filter Papers by Code Availability

Your suggestions, comments, and candid feedback would be highly welcome!

Here's what it looks like in action:

Input (with code filter on): "photo style transfer"
https://www.catalyzex.com/search?query=photo%20style%20transfer&with_code=true

Output: list of all "photo style transfer" papers with corresponding code implementations linked

https://preview.redd.it/femhbuklcyi91.png?width=2894&format=png&auto=webp&s=c794c351cba0e16e895bad478588d1ee0aa39655

Video of it in action:

https://reddit.com/link/wtl9dl/video/mnzdgm58hyi91/player

1 Comment
2022/08/21
00:00 UTC

4

[R] New paper on autonomous driving and multi-task: "HybridNets: End-to-End Perception Network"

0 Comments
2022/03/18
21:52 UTC

3

Fully interpretable logical learning and reasoning for board game winner prediction with a Tsetlin Machine obtains 92.1% accuracy on 6x6 Hex boards.

Logical learning of strong and weak board game positions

The approach learns what strong and weak board positions look like with simple logical patterns, facilitating both global and local interpretability, as well as explaining the learning steps. Our end-goal in this research project is to enable state-of-the-art human-AI-collaboration in board game playing through transparency. Paper: https://arxiv.org/abs/2203.04378

1 Comment
2022/03/10
08:18 UTC

7

NeurIPS 2021 - Curated papers - Part 2

In part 2, I have discussed the following papers:

  1. Probing Inter-modality: Visual Parsing with Self-Attention for Vision-Language Pre-training
  2. Attention Bottlenecks for Multimodal Fusion
  3. AugMax: Adversarial Composition of Random Augmentations for Robust Training
  4. Revisiting Model Stitching to Compare Neural Representations

https://rakshithv-deeplearning.blogspot.com/2021/12/neurips-2021-curated-papers-part2.html

0 Comments
2021/12/28
17:28 UTC

6

Running Collaborative Machine Learning Experiments - Guide

Sharing experiments to compare machine learning models is important when you're working with a team of ML engineers - whether sharing a modified dataset or the exact reproduction of a specific experiment.

The following guide shows how you can bundle your data and code changes for each experiment (using Git, Data Version Control, and Google Cloud) and push them to a remote for somebody else to check out: Running Collaborative Experiments

0 Comments
2021/12/20
10:04 UTC

5

Steerable discovery of neural audio effects

Paper: https://arxiv.org/abs/2112.02926

Abstract:

Applications of deep learning for audio effects often focus on modeling analog effects or learning to control effects to emulate a trained audio engineer. However, deep learning approaches also have the potential to expand creativity through neural audio effects that enable new sound transformations. While recent work demonstrated that neural networks with random weights produce compelling audio effects, control of these effects is limited and unintuitive. To address this, we introduce a method for the steerable discovery of neural audio effects. This method enables the design of effects using example recordings provided by the user. We demonstrate how this method produces an effect similar to the target effect, along with interesting inaccuracies, while also providing perceptually relevant controls.

Repo with video demo & Colab examples: https://github.com/csteinmetz1/steerable-nafx

Submission statement: This has already been making the rounds on a few other subs, but I thought that this was an interesting conference abstract and project. I'm personally interested in the potential for driving a similar process in reverse, i.e., removing distortion rather than adding it. If anyone else has read any good papers pertaining to audio restoration recently, let me know! (I have a pet project to eventually restore some very low-quality audio of a deceased relative, so I've been loosely keeping tabs on ML audio processing, but it's not my primary area.)
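The "random weights produce compelling audio effects" observation the abstract builds on can be illustrated with a minimal sketch (not the paper's steerable method): a cascade of random FIR convolutions with tanh nonlinearities applied to a test tone. The layer count, kernel size, and peak normalization are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_neural_effect(audio, n_layers=3, kernel=64):
    # Cascade of random convolutions with tanh saturation: a tiny
    # stand-in for the random-weight neural effects the paper builds on.
    x = audio.astype(np.float64)
    for _ in range(n_layers):
        w = rng.standard_normal(kernel) / np.sqrt(kernel)
        x = np.tanh(np.convolve(x, w, mode="same"))
    # Rescale so the wet signal peaks at the dry signal's peak level.
    peak = float(np.max(np.abs(audio))) or 1.0
    return x * peak / (float(np.max(np.abs(x))) or 1.0)

t = np.linspace(0, 1, 8000, endpoint=False)
dry = 0.5 * np.sin(2 * np.pi * 220 * t)  # 220 Hz test tone at 8 kHz
wet = random_neural_effect(dry)
```

The random filters smear and saturate the tone in unpredictable ways; the steerable-discovery method in the paper is about searching over such networks to match a user-provided target effect instead of accepting whatever the random draw gives.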

1 Comment
2021/12/16
04:40 UTC

7

BEIT: BERT Pre-Training of Image Transformers

https://rakshithv.medium.com/beit-bert-pre-training-of-image-transformers-e43a9884ec2f

A BERT-like architecture for training vision models. Vision Transformers use image patches as the analogue of text tokens.
BEiT likewise formulates an objective similar to masked language modeling (MLM), but directly predicting a masked 16x16 image patch, where each pixel can take values from 0 to 255, is challenging.
Hence it uses an image tokenizer and predicts discrete visual tokens instead of raw patches.
BEiT also needs relatively less pre-training data than Vision Transformers.

In this blog, I tried to put together my understanding of the paper.
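The patch-as-token idea can be sketched in a few lines: split an image into 16x16 patches and zero out a subset of them as mask positions. The image size, mask ratio, and zero-fill "mask embedding" are illustrative stand-ins, not BEiT's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p=16):
    # Split an (H, W, C) image into non-overlapping p x p patches,
    # each flattened to a vector: the visual analogue of text tokens.
    h, w, c = img.shape
    patches = img.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * c)

def mask_patches(patches, ratio=0.4):
    # Mask a subset of patch positions; BEiT's objective predicts the
    # tokenizer's discrete codes at these positions, not raw pixels.
    n = len(patches)
    idx = rng.choice(n, size=int(n * ratio), replace=False)
    masked = patches.copy()
    masked[idx] = 0.0  # stand-in for a learned [MASK] embedding
    return masked, idx

img = rng.random((224, 224, 3)).astype(np.float32)
patches = patchify(img)            # 14 x 14 grid of 16x16x3 patches
masked, idx = mask_patches(patches)
```

A 224x224x3 image yields 196 patch tokens of dimension 768; the model's loss is computed only at the masked indices, mirroring MLM in text.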

0 Comments
2021/09/12
15:53 UTC

5

What are some good review articles to start learning about ML application in Biomedical disciplines?

I have been working in ML for some time now, and want to start learning about its applications in the biomedical domain. What would be some good starting points?

1 Comment
2021/08/23
09:28 UTC

4

The future of autonomous robots in factories - Autonomous Robotic Cutting!

0 Comments
2021/07/17
21:59 UTC

4

[D] Charformer Paper Explained and Visualized: Fast Character Transformers via Gradient-based Subword Tokenization

1 Comment
2021/06/30
15:16 UTC

9

ProteinBERT: A universal deep-learning model of protein sequence and function

0 Comments
2021/05/30
09:02 UTC

1

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

0 Comments
2021/05/23
14:26 UTC

2

MLP-Mixer: An all-MLP Architecture for Vision

0 Comments
2021/05/23
14:25 UTC

3

Emerging Properties in Self-Supervised Vision Transformers (DINO)

0 Comments
2021/05/23
14:24 UTC

4

[R] A Review of "Neural Anisotropy Directions" (2020)

1 Comment
2021/05/21
11:49 UTC

8

[P] Browse the web as usual and you'll start seeing code buttons appear next to papers everywhere. (Google, ArXiv, Twitter, Scholar, Github, and other websites). One of the fastest-growing browser extensions built for the AI/ML community :)

0 Comments
2021/04/17
23:31 UTC

6

PET, iPET, ADAPET papers explained! “Small language models are also few-shot learners”. Paper links in the comment section and as always, in the video description.

1 Comment
2021/04/02
11:54 UTC

2

New Pre-Print: Bio-Inspired Robustness: A Review

Hello everyone,

We recently added a new pre-print on how human visual system-inspired components can help with adversarial robustness. We study recent attempts in the area and analyze their properties and evaluation criteria for robustness. Please let us know what you think of the paper and any feedback is highly appreciated!!! :)

P.S. Please forgive the Word format (TT TT); it's the first and last time I do this in my life. Otherwise it's LaTeX all the way.

Title: 'Bio-Inspired Robustness: A Review'

Arxiv link: https://arxiv.org/abs/2103.09265

Abstract: Deep convolutional neural networks (DCNNs) have revolutionized computer vision and are often advocated as good models of the human visual system. However, there are currently many shortcomings of DCNNs, which preclude them as a model of human vision. For example, in the case of adversarial attacks, adding small amounts of noise to an image containing an object can lead to strong misclassification of that object, while for humans the noise is often invisible. If vulnerability to adversarial noise cannot be fixed, DCNNs cannot be taken as serious models of human vision. Many studies have tried to add features of the human visual system to DCNNs to make them robust against adversarial attacks. However, it is not fully clear whether human vision-inspired components increase robustness, because performance evaluations of these novel components in DCNNs are often inconclusive. We propose a set of criteria for proper evaluation and analyze different models according to these criteria. We finally sketch future efforts to make DCNNs one step closer to a model of human vision.

0 Comments
2021/03/25
14:24 UTC

9

From MIT CSAIL researchers! Create novel images using GANs! (Check out where they create a new face using faces of 4 different people.)

0 Comments
2021/03/24
20:11 UTC
