Welcome to /r/Automate!
A place for the discussion of automation, additive manufacturing, robotics, AI, and all the other tools we've created to enable a global paradise free of menial labor. All can share in our achievements in a world where food is produced, water is purified, and housing is constructed by machines.
Once, a long time ago, a shovel was a marvel of modern technology. Today, a shovel is $3.99 at the hardware store. Who says the same won't happen with robotics?
Other information
Reddiquette is an informal expression of the values of many redditors, as written by redditors themselves. Please abide by it the best you can.
Related Subreddits
/r/3DPrinting
/r/Aihub
/r/AINotHuman
/r/Anticonsumption
/r/Antiwork
/r/Arcology
/r/Artificial
/r/BasicIncome
/r/Botsrights
/r/CanadianFuturistParty
/r/Computervision
/r/ConcentrationOfWealth
/r/CryptoCurrencies
/r/Dailyscripts
/r/DarkFuturology
/r/DomesticRobots
/r/Driverless
/r/Electronics
/r/EngineeringPorn
/r/Farmtech
/r/FullAutoCapitalism
/r/Futuretimeline
/r/Futures_Studies
/r/Futurism
/r/FuturistParty
/r/Futurology
/r/HackedGadgets
/r/Homeautomation
/r/Learnprogramming
/r/Lostgeneration
/r/Luddite
/r/MachineLearning
/r/MachinesLearn
/r/Manna
/r/Manufacturing
/r/PersonalAutomation
/r/PLC
/r/PostScarcity
/r/Programming
/r/Rad_Decentralization
/r/Redditscripting
/r/Robotics
/r/RoboticLawnmowers
/r/RobotsWithJobs
/r/RPA
/r/Sciences
/r/SelfDrivingCars
/r/Sentientautocorrect
/r/ShittyRobots
/r/Simulate
/r/Socialcitizens
/r/Technocracy
/r/Tech
/r/Technology
/r/Technostism
/r/TheVenusProject
/r/TookOurJerbs
/r/Totallynotrobots
/r/Vyrdism
Hello,
I want to automate the following process, and I don't know whether I should use Python or Power Automate, and whether to run it on a virtual machine or in Azure Functions (or maybe another approach?). I have a Microsoft 365 Business Standard license.
I want the process to be fully automated and launched every day at a set time, without my intervention and without my laptop being turned on.
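If you go the Azure Functions route, a minimal sketch of a timer-triggered function (Python v2 programming model) would look like the following; the 07:00 UTC schedule is arbitrary and the job body is a placeholder:

```python
import logging
import azure.functions as func

app = func.FunctionApp()

# NCRONTAB format: sec min hour day month day-of-week -> 07:00 UTC daily.
@app.timer_trigger(schedule="0 0 7 * * *", arg_name="timer", run_on_startup=False)
def daily_job(timer: func.TimerRequest) -> None:
    logging.info("Daily run started")
    # ... the actual process goes here (placeholder) ...
```

Because the function runs in Azure, the laptop can stay off; a consumption-plan function is usually cheaper than keeping a VM running for one daily job.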
Thanks in advance for your help!
Hey folks, just stumbled upon something pretty cool and had to share! Ever thought about creating your own hit song but felt like you lacked the musical chops? Well, it seems like AI is coming to the rescue once again! There's this new tool called Suno AI that's all about helping us unleash our inner pop star with some smart tech assistance. Imagine meshing your lyrics with AI-generated tunes. Could be the next big thing at your karaoke night, right?
On another note, have you heard about the Rabbit R1? It's this new release that's supposed to revolutionize how we interact with objects via AI. They're using something called LiDAR-Camera fusion for object detection, which sounds like it’s straight out of a sci-fi movie. Definitely keeping an eye on how that unfolds!
Both of these advancements make you wonder about the scope and future of AI in our daily lives. It's like every day, there's something new popping up that was once considered just a wild idea. Thoughts?
P.S. This post was curated with some help from Thinkforce AI Agent / Minds. These guys offer free access to some neat tools like GPT-4 for language generation and Stable Diffusion 3 for image generation, and you can even train the agent with your own data. Check them out at www.thinkforce.ai if you're curious or want to get your hands on some AI magic for your projects!
Here are the top AI News Today:
That's it for AI News today; a fair bit of news.
Greetings,
My problem statement is that I need to control a Schunk gripper through LabVIEW. The issue is that the gripper only responds to the Control Center app provided by the company. My question is: is there any way to drive the Control Center software so that we can control it, and pass input variables to it, from LabVIEW? I am comparatively new to this area, so any kind of guidance would be very helpful.
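If the gripper really only talks to the vendor app, one workaround is UI automation: a small Python script drives the Control Center window, and LabVIEW calls that script through the System Exec VI. A sketch with pywinauto; the window title and control identifiers here are hypothetical and would need to be discovered on the real app (e.g. via pywinauto's print_control_identifiers()):

```python
# control_center.py - drive the vendor's Control Center UI from a script
# that LabVIEW can invoke via the System Exec VI.
import sys
from pywinauto.application import Application

def set_grip_width(width_mm: str) -> None:
    # Attach to the already-running app; ".*Control Center.*" is a
    # hypothetical window title.
    app = Application(backend="uia").connect(title_re=".*Control Center.*")
    win = app.top_window()
    # "WidthEdit" / "MoveButton" are hypothetical control identifiers.
    win.child_window(auto_id="WidthEdit", control_type="Edit").set_edit_text(width_mm)
    win.child_window(auto_id="MoveButton", control_type="Button").click_input()

if __name__ == "__main__":
    set_grip_width(sys.argv[1])  # e.g. python control_center.py 42.5
```

It is also worth checking whether the gripper exposes a fieldbus or Modbus interface directly, which would be far more robust than scripting a GUI.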
Hi folks,
I wanted to share with you a cool project I recently undertook that leverages the power of AI to help manage my YouTube channel!
The idea was to use CrewAI to automate tasks like competitor YouTube channel analysis and identify trending topics. This way, I could gauge these topics against my own content ideas to see if there is general interest in a given topic.
The AI Crew was designed to crawl the web (via Google), call APIs like the YouTube and Reddit APIs, and use Google Trends to determine how likely a given topic is to generate engagement.
For this, I created the following AI Assistants (or agents in CrewAI lingo):
I used a pretty straightforward setup that relied on the usual suspects:
I tested it with models like GPT-4, GPT-4-Turbo, and a few local models like Nous Hermes 2, Mistral, and CodeLlama, among others.
The results from GPT-4-Turbo were AMAZING, and I'm sure I can make them better by fine-tuning the data going into the model, but they were not really that great with local AI, which was expected given the immense number of tokens. However, I was positively surprised by the performance of Nous Hermes 2 - 13B. Not only did it actually run, but it used the tools I custom-built for it! Quite impressive, really.
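For readers who haven't tried CrewAI, here is a minimal sketch of what one such agent and task can look like; the role, goal, and inputs below are illustrative rather than the exact ones from the repo, and an OpenAI API key is assumed in the environment:

```python
from crewai import Agent, Task, Crew

# Illustrative agent: the role/goal text is hypothetical, not the repo's.
trend_researcher = Agent(
    role="YouTube Trend Researcher",
    goal="Find trending topics in the channel's niche and rate their engagement potential",
    backstory="An analyst who cross-checks YouTube, Reddit and Google Trends data.",
    verbose=True,
)

research_task = Task(
    description="Research trending topics about {niche} and shortlist the top five.",
    expected_output="A ranked list of five topics with a short engagement rationale each.",
    agent=trend_researcher,
)

crew = Crew(agents=[trend_researcher], tasks=[research_task])
result = crew.kickoff(inputs={"niche": "home automation"})  # fills in {niche}
print(result)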
The video is available below:
https://youtu.be/5JoVeYcxgpU?si=cxFwHO1x_zCghMYB
You are more than welcome to try out the code for yourselves: https://github.com/fmiguelmmartins/crewaiyoutube
And here is an article on Medium with the step-by-step process (don't worry, I have a free account):
https://medium.com/@fmiguelmmartins/create-an-ai-team-to-manage-your-youtube-channel-5dc1e6c9b31b
Hope you guys enjoy it, and if you are kind enough, please leave me some feedback so I can improve over time!
Thank you!
Filipe
Ray-Ban Meta Smart Glasses have been upgraded to incorporate multimodal AI capabilities, marking a significant advancement in wearable technology. This enhancement enables users to interact with a diverse range of information formats, including photos, audio, and text.
Key points:
PS: If you enjoyed this post, you’ll love my ML-powered newsletter that summarizes the best AI/tech news from 50+ media sources. It’s already being read by hundreds of professionals from OpenAI, HuggingFace, Apple…
I manage a small Meta Ads account for my client, and we typically generate up to 3,500 leads per month. I'm interested in automating the transfer of new leads into a Google Sheet. However, I'm uncertain whether the tool I'm using, Pabbly, can extract additional fields such as the campaign name, ad set name, and ad name, in addition to the basic fields like name, phone number, and email address.
I've watched several tutorials, but they only cover extracting basic contact information. Can anyone provide guidance or recommendations on how to automate the extraction of these additional fields into the Google Sheet?
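If Pabbly can't surface those fields, one fallback is pulling them from the Graph API yourself: the Lead node exposes campaign_name, adset_name, and ad_name alongside the form answers. A minimal sketch, assuming you already have a page access token and receive leadgen IDs (e.g. from a webhook); the API version pin is arbitrary:

```python
import requests

GRAPH = "https://graph.facebook.com/v19.0"

def fetch_lead(leadgen_id: str, token: str) -> dict:
    # campaign/adset/ad names come back alongside the form answers.
    fields = "created_time,campaign_name,adset_name,ad_name,field_data"
    resp = requests.get(f"{GRAPH}/{leadgen_id}",
                        params={"fields": fields, "access_token": token})
    resp.raise_for_status()
    lead = resp.json()
    # Flatten field_data ({"name": ..., "values": [...]}) into plain columns.
    answers = {f["name"]: f["values"][0] for f in lead.pop("field_data", [])}
    return {**lead, **answers}
```

From there, appending the flattened dict to the sheet with gspread (or keeping Pabbly for just that last hop) closes the loop.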
Any response is much appreciated. Thank you in advance guys 🙏
Here are the top stories from AI Today -
I'm reaching out to gather insights on a new embedded spectroscopy module that my startup is developing. Learn more about it here: <agrsensors.com/spectre-mini>
We initially built the device for detecting crop diseases early, with support from the U.S. National Science Foundation and the National Institute of Standards and Technology. It surprised us by outperforming standard machine vision on accuracy by 5X, with 1,500X faster AI model training. A number of unique features also came out of easing integration into our own systems, such as embedded optical calibration and robust connectivity options.
This seems to resonate with others who are solving similar quality and process control problems, so we're eager to hear from any industrial automation professionals who are interested in this technology. What features stand out to you? What improvements would you suggest? And importantly, what value does this hold for you?
All the latest AI News -
Hi, I need to extract tables from PDFs in a variety of different formats from a public database, for research I'm helping with. I've tried using Tabula and some other libraries to convert the PDFs into text, but Tabula sometimes merges tables and produces strange results with many unknown fields.
I'm thinking of using clustering to classify the main few formats (I'd say there's <10 maybe), and then manually coming up with scripts to extract the fields of interest from the popular formats. Is this a feasible solution?
Alternatively, the PhD student I'm working under said we could always just use the GPT-4 API and call it a day, but I'm looking into other solutions out of my own interest.
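The clustering idea seems feasible. A cheap first pass, before anything fancier, is to fingerprint each PDF by the header row Tabula finds on page one and group on that; a minimal sketch with tabula-py, under the assumption that the formats differ visibly in their headers:

```python
from collections import defaultdict
import tabula  # tabula-py; requires a Java runtime

def fingerprint(path: str) -> tuple:
    # Use the first table's header row as a crude "format" signature.
    tables = tabula.read_pdf(path, pages=1, multiple_tables=True)
    if not tables:
        return ("<no-table>",)
    return tuple(str(c).strip().lower() for c in tables[0].columns)

def group_by_format(paths: list[str]) -> dict:
    groups = defaultdict(list)
    for p in paths:
        groups[fingerprint(p)].append(p)
    # One hand-written extraction script per key covers the popular formats;
    # the long tail can go to a manual (or GPT-4) fallback.
    return groups
```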
Where do you save the ChatGPT prompts that you use frequently? Notes app?
A practical application of ChatGPT is editing existing text to improve its quality, correct grammar errors, or tailor it for a specific audience. Marketers, founders, and business professionals often find themselves pasting text into ChatGPT, pulling the prompt from a notes app or another source, and then pasting the result back into their work.
I created a desktop application that allows you to access your most frequently used GPT prompts with a single click or keyboard shortcut while using any application or browser.
Take a look here:
https://www.topbar.ai
Hey r/Automate,
I’m crafting an autonomous AI agent system at relix.ai, with a focus on innovative LLM applications. I’m curious to gather community insights on interaction design.
How do you think such a system should be designed for optimal interaction? Considering elements like command input, feedback mechanisms, and user experience, what are the key factors to ensure the system is intuitive, effective, and secure?
Looking forward to your thoughts on creating a seamless user-AI interface!
Hi, we’ve built a graph-based workflow builder to automate tasks in the browser. Our DSL lets you orchestrate LLM-powered web actions with loops, conditionals and memory.
The problem with existing web agents is that zero-shot planning is unreliable for long workflows. That's why we've built a framework that lets you define a graph-based workflow for web agents. Every action is modeled as a node, and the next action is defined by an edge.
There are two categories of node types: low-level browser interaction nodes and high-level reasoning nodes. Low-level nodes perform actions like clicking, inputting text, or navigating to a new URL. High-level nodes can extract structured data from a webpage or make a conditional decision based on its content.
All of the nodes can be configured with a natural-language prompt. For instance, if you had a list of fire department websites, you could define a click node with the prompt "Click on the contact page". This lets workflows generalize across websites, independent of layout.
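To make the node/edge idea concrete, here is a purely hypothetical illustration of such a graph in plain Python (this is not CloudCruise's actual DSL, which isn't shown in the post):

```python
# Hypothetical graph-workflow declaration; not the product's real syntax.
workflow = {
    "nodes": {
        "open":    {"type": "navigate", "prompt": "Go to {site_url}"},
        "contact": {"type": "click",    "prompt": "Click on the contact page"},
        "check":   {"type": "decide",   "prompt": "Is an email address visible?"},
        "scrape":  {"type": "extract",  "prompt": "Extract the contact email",
                    "schema": {"email": "string"}},
    },
    "edges": [
        ("open", "contact"),
        ("contact", "check"),
        ("check", "scrape", {"when": "yes"}),   # conditional edge
        ("check", "contact", {"when": "no"}),   # loop back and retry
    ],
}
```

The point of the structure is that the LLM only ever decides one local step (which element matches the prompt, or which branch to take), rather than planning the whole run zero-shot.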
Our early users use us to automate tasks like:
We just released our Chrome extension, where you can build and run workflows yourself. It's free for up to 200 browser actions a month. We'd love for you to try it and give us feedback. Here's the link to the extension: https://chromewebstore.google.com/detail/cloudcruise/idegaiedponalbmkcpmmeimfjpfkhopn
Last but not least, here are some of the strange things we’ve encountered so far whilst automating the web:
Looking for review
Hello everyone, thanks for taking the time to look at this post. I have been searching the internet to see whether this idea is possible and, if so, how I might accomplish it.
I would like to use a robotic arm to sort Lego for me. There are programs and websites that can identify single pieces; the challenge would be identifying hundreds of pieces and having the robot place them into different bins.
Do you think this would be possible?
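It should be possible; hobbyists have built Lego-sorting machines along these lines. The software core is a classify-then-bin loop. A rough sketch under heavy assumptions: a camera over the feed area, an image classifier fine-tuned on Lego part photos (the weights file is hypothetical), and a move_to_bin() primitive you would implement for your particular arm:

```python
import cv2
import torch
from torchvision import models, transforms

NUM_PART_CLASSES = 400  # however many part types you want to bin

# Assumes a ResNet fine-tuned on Lego part images; "lego_parts.pt" is hypothetical.
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, NUM_PART_CLASSES)
model.load_state_dict(torch.load("lego_parts.pt"))
model.eval()

prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def move_to_bin(part_class: int) -> None:
    # Hypothetical arm primitive: pick up the piece, drop it in bin part_class.
    ...

def sort_frame(frame) -> None:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR
    with torch.no_grad():
        part_class = model(prep(rgb).unsqueeze(0)).argmax(dim=1).item()
    move_to_bin(part_class)

cap = cv2.VideoCapture(0)  # camera watching the feed area
while True:
    ok, frame = cap.read()
    if ok:
        sort_frame(frame)
```

The mechanical side (singulating pieces so the camera sees one at a time) is usually the harder half of the problem.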
I've been thinking about the potential of integrating machine learning into vehicle maintenance systems to automate the detection of component failures. Imagine a scenario where sensors throughout your car continuously monitor its various components, and if any critical part fails, the system detects it and alerts you or initiates necessary actions automatically.
I'm curious about the community's thoughts on this concept. Do you think leveraging machine learning in this way could significantly improve vehicle safety and maintenance efficiency? What challenges do you foresee in implementing such systems, and how could they be addressed?
Additionally, I'm interested in any real-world examples or experiences you may have with similar technologies or approaches. Have you come across any vehicles or systems that utilize machine learning for failure detection, and if so, what has been your experience with them?
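On the detection side, one common baseline for this kind of monitoring is unsupervised anomaly detection over the sensor stream. A minimal sketch with scikit-learn's IsolationForest; the four sensor channels and their values are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: rows = time steps, columns = sensor channels
# (coolant temp, oil pressure, vibration RMS, battery voltage).
rng = np.random.default_rng(0)
normal_driving = rng.normal(loc=[90.0, 4.5, 0.2, 13.8], scale=0.5, size=(5000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_driving)  # learn what "healthy" telemetry looks like

new_reading = np.array([[90.2, 1.1, 0.9, 13.7]])  # pressure drop + vibration spike
if detector.predict(new_reading)[0] == -1:
    print("Anomaly: flag component for inspection")
```

In practice the hard parts are less the model and more the labeling, sensor drift, and deciding when an anomaly warrants alerting the driver versus logging for the workshop.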
Hi there, I am trying to solve an issue at my work, which is going through the process of upgrading our digital filing system from a database-type setup in File Explorer to an ECM system. We have been asked to tidy up and name thousands of data files spanning decades. Naturally, I am trying to automate a solution for my team, as the time this would take manually is unachievable.
I have already found a solution using ExifTool to add custom 'keywords' or tags to all the JPGs, using a script that scans a parent folder and all of its sub-folders and assigns a common tag relating to that parent folder, in this case a site code. This way, when you search for a site in the new ECM, any file with that site code will show up via the associated metadata tag.
I am hoping to do the same for the hundreds of .mp4 and .mov files. I understand that ExifTool is limited when writing metadata to video files, and I am struggling to find the best solution. Any advice would be much appreciated!
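ExifTool can write XMP tags into MP4 and MOV containers (it is the QuickTime-native tags that are more restricted), so one option worth testing is writing the site code to XMP-dc:Subject, which many systems index as keywords; whether your ECM reads XMP from video files is an assumption you would need to verify on a sample. A sketch mirroring the JPG workflow, with a hypothetical folder path:

```python
import subprocess
from pathlib import Path

def tag_videos(parent: Path, site_code: str) -> None:
    # Append the site code as an XMP subject/keyword on every video
    # under the parent folder; ExifTool writes XMP into MP4/MOV.
    for video in list(parent.rglob("*.mp4")) + list(parent.rglob("*.mov")):
        subprocess.run(
            ["exiftool", "-overwrite_original",
             f"-XMP-dc:Subject+={site_code}", str(video)],
            check=True,
        )

tag_videos(Path(r"\\server\archive\SITE042"), "SITE042")  # hypothetical path
```

Testing on copies first is wise, since -overwrite_original skips ExifTool's backup files.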