/r/datacleaning
Garbage in, garbage out! Data scientists can spend up to 80 percent of their time correcting data errors before extracting value from the data.
We at /r/datacleaning are interested in data cleaning as a preprocessing step to data mining. This subreddit is focused on advances in data cleaning research, data cleaning algorithms, and data cleaning tools. Related topics that we are interested in include: databases, statistics, machine learning, data mining, AI, information theory, information retrieval, pattern recognition, NLP, data visualization, etc.
Hey everyone!
If you work with data regularly, you know that data cleansing and formatting aren't just about making things look nice. They have a huge strategic impact, and I came across a blog that dives into this topic in detail. Here are some key insights that really stood out:
Improved Decision-Making: Clean data reduces errors and gives a reliable basis for making better decisions.
Enhanced Operational Efficiency: Consistent data formats make it easier for teams to collaborate and automate processes.
Maximized ROI on Data Investments: By cleaning and formatting data regularly, organizations can maximize the ROI on their data investments.
The blog makes a solid case for treating data cleansing as an investment that boosts performance, not just an extra step in data management. If you're interested in learning more, here’s the full post: Beyond Aesthetics: The Strategic Value of Data Cleansing and Formatting
What role does data cleansing play in your work? Do you see it as essential, or just an extra task? Let’s discuss!
Hi guys! I urgently need a mentor who can give me tasks covering everything from data cleaning to visualization. I never studied data analytics formally; I just learned from YouTube. I need help, and I am counting on this Reddit community.
I don't know if this is the right place for this, but I need help cleaning this old dictionary; it is the only dictionary my native language has as of now. I want to make an app from it.
I discovered this PDF on the Internet Archive after looking for it for a while. It seems to be a digitized version of the physical copy.
The text can be copied, but one letter doesn't copy properly: the Ʋ letter I have pointed an arrow to gets mistaken for other letters like V and U. These days that letter is written as Ŵ.
The dictionary goes from Tumbuka to Tonga to English and then flips at some point to go from English to Tonga to Tumbuka.
I only want the Tumbuka-to-English pairs and vice versa, ignoring the Tonga, so I can make the mobile app more easily.
Here is a link to the dictionary
Hi all,
It’s time for us to give back to the Reddit communities we love so much.
Normally when creating an account on Listcleaner.net you get 100 free cleaning credits to try our email cleaning service.
Right now we want to give 25 users of the r/datacleaning subreddit not 100 but 1000 credits to clean your email data when creating an account.
You DO NOT have to buy anything, and the only contact information required to create your account on Listcleaner.net is your email address.
After creating an account, please tell us via DM your Listcleaner account's username or email address, and we will add the credits to your account.
The credits can be used on our website and via our API.
Happy email cleaning!
The Listcleaner.net team
Hey everyone,
I recently came across an insightful blog on strategies for improving data quality through data cleansing, and I thought it would be useful to share here.
The blog breaks down several key methods to enhance data quality, such as:
Handling Missing Data: Techniques for identifying and addressing gaps in datasets.
Standardizing Data Formats: Ensuring consistency across datasets for easier analysis.
Removing Duplicates: Avoiding redundancy and improving dataset efficiency.
Validating Data: Verifying the accuracy of data to ensure reliable outcomes.
These strategies are super helpful for anyone looking to streamline their data cleansing process and make sure their datasets are in top shape. If you're interested in diving deeper into these techniques, you can check out the full blog here: Strategies for Improving Data Quality Through Data Cleansing.
What are some of your go-to methods for improving data quality? Let’s discuss!
Hey everyone, I recently came across an insightful article on the importance of data cleansing in building effective predictive models. As we all know, the quality of data is critical for accurate predictions, but this blog dives deeper into how data cleansing lays the foundation for success in predictive analytics.
The article discusses:
Why messy data can lead to inaccurate predictions
Key steps involved in data cleansing, including deduplication, dealing with missing values, and correcting inconsistencies
The role of data quality in the entire lifecycle of a predictive model
Best practices to improve the accuracy and reliability of your predictive models by focusing on clean data
It’s a great read for anyone looking to improve their predictive modeling workflows. If you’re interested, check it out here.
Let’s discuss: How do you handle data cleansing in your projects? What tools or techniques do you use to ensure high data quality?
When working with dirty data, what data issues have you run into the most? What's important to look out for? Do your tools look out for these things, or do you have to build these checks manually?
I've been trying to find datasets to practice my cleaning skills on, but the datasets I find are already clean. Also, if there's a way to find datasets to clean with over a million rows, that would be so helpful!
Hi Guys,
Let's keep it short,
I want to learn data cleaning using Power Query/Power BI and Pandas (Python).
But the problem is that I have no mentor or anyone who can check my cleaned and processed data. I don't even know whether I am cleaning the data appropriately or not.
Please tell me guys how this subreddit can be helpful.
Please help. I'm desperate for help!
https://bitgrit.net/competition/22
The challenge tasks solvers to leverage their expertise to develop a classification model that can accurately discriminate between the breath of COVID-positive and COVID-negative individuals, using existing data. The ultimate goal is to improve the accuracy of the NASA E-Nose device as a potential clinical tool that would provide diagnostic results based on the molecular composition of human breath.
I have a table in Excel filled with typos. For example:
Row1: obi LLC, US, SC, 29418, Charlestone, id5
Row2: obi company, US, SC, 29418, Charlestone, id4
Row3: obi gmbh, US, SC, 29418, Charlestone, id3
Row4: obi, US, SC, 29418, Charlestone, id2
Row5: Obi LLC, US, SC, 59418, Charlestone, id1
Row6: Starbucks, US, SC, 1111, Budapest, id9
Row7: Starbucks kft, HU, BP, 1111, Budapest, id8
Row8: Starbucks, HU, BP, 1111, Budapest, id7
The correct rows here are row1 and row8, because their values occur most frequently in the table. I want to create a new table with only the correct records; the expectation is to assign the standardized value to each row based on its relationship to the others. It's important to consider not only the name but also the name/country/state/zip code/city combination. Fuzzy matching wouldn't work, because I don't have a list of the correct data. I initially tried using VBA, but I only managed to list the one row that occurred most frequently (in this case row 1). I can share my code if necessary. Have you ever cleaned such messy data? What would you recommend? Thank you for your advice.
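For illustration, here is a minimal pandas sketch of one possible approach: group rows under a normalized name key, then keep each column's most frequent value per group. The column names and the legal-suffix list are assumptions for the toy data, not taken from the actual sheet:

```python
import pandas as pd

# Toy reconstruction of the sheet above (column names are placeholders).
rows = [
    ("obi LLC",       "US", "SC", "29418", "Charlestone", "id5"),
    ("obi company",   "US", "SC", "29418", "Charlestone", "id4"),
    ("obi gmbh",      "US", "SC", "29418", "Charlestone", "id3"),
    ("obi",           "US", "SC", "29418", "Charlestone", "id2"),
    ("Obi LLC",       "US", "SC", "59418", "Charlestone", "id1"),
    ("Starbucks",     "US", "SC", "1111",  "Budapest",    "id9"),
    ("Starbucks kft", "HU", "BP", "1111",  "Budapest",    "id8"),
    ("Starbucks",     "HU", "BP", "1111",  "Budapest",    "id7"),
]
df = pd.DataFrame(rows, columns=["name", "country", "state", "zip", "city", "id"])

# Group rows that refer to the same entity: lowercase the name and strip
# common legal suffixes so "obi LLC", "obi gmbh" and "obi" share one key.
key = (df["name"].str.lower()
       .str.replace(r"\b(llc|gmbh|kft|company|inc|ltd)\b", "", regex=True)
       .str.strip()
       .rename("key"))

# Within each entity, keep each column's most frequent value.
cols = ["name", "country", "state", "zip", "city"]
standardized = df.groupby(key)[cols].agg(lambda s: s.mode().iloc[0])

# Stamp the standardized values back onto every original id.
cleaned = df.assign(key=key)[["id", "key"]].join(standardized, on="key")
print(cleaned.drop(columns="key"))
```

Ties in `mode()` break alphabetically here; real data would likely need case-insensitive counting or smarter tie-breaking.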
Hey guys, I think this might be very relevant in this sub. Lately I've been working on a tool to clean any textual data. In a nutshell, it can convert inconsistent data like this (note all the names are different and hard to analyse):
Into something like this:
I'm actively looking for feedback on whether this meets someone's needs or needs to be changed for your specific case. Please let me know what you think!
I have a column named 'informations' that holds information about used cars. Each cell contains attribute/value pairs separated by commas ( , ), with multiple attributes and their values in the same cell, like this one:
,Puissance fiscale,4,Boîte de vitesse,Manuelle,Carburant,Essence,Année,2013,Kilométrage,120000,Model,I20,Couleur,bleu,Marque de voiture,Hyundai,Cylindrée,1.2
As you can see, that is a single cell in the first line of the column named 'informations':
Puissance fiscale has 4 as its value
Boîte de vitesse has Manuelle as its value
etc.
NB: I have around 9000 lines, and not every line has the same structure as this one.
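For illustration, a minimal pandas sketch of one way to split such cells, pairing alternating tokens as attribute/value. It assumes the values themselves never contain commas; rows with an odd token count won't pair cleanly and would need inspection:

```python
import pandas as pd

df = pd.DataFrame({"informations": [
    ",Puissance fiscale,4,Boîte de vitesse,Manuelle,Carburant,Essence,"
    "Année,2013,Kilométrage,120000,Model,I20,Couleur,bleu,"
    "Marque de voiture,Hyundai,Cylindrée,1.2",
]})

def to_dict(cell):
    # Split ",attr,value,attr,value,..." into {attr: value}, pairing tokens.
    if not isinstance(cell, str):
        return {}
    tokens = [t.strip() for t in cell.split(",") if t.strip()]
    return dict(zip(tokens[0::2], tokens[1::2]))

# One column per attribute; rows missing an attribute get NaN there.
attrs = df["informations"].apply(to_dict).apply(pd.Series)
result = df.join(attrs)
print(result[["Marque de voiture", "Model", "Année"]])
```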
In today's data-driven world, where data breaches are a constant threat, safeguarding your organization's sensitive information is paramount. Learn how to implement robust data classification processes and explore top tools for securing your data from our blog.
Explore now: https://www.infovision.com/blog/decoding-data-classification-simplified-yet-comprehensive-handbook
ORDER QUANTITY | UNIT SELLING PRICE | TOTAL COST
0 | 151.47 | -86.9076
0 | 690.89 | -1002.1401
0 | 822.75 | -978.8337
I am trying to clean a dataset and want to understand whether this data makes sense or whether I should delete it from the table. About 28% of the total entries contain such data, so it won't make sense to just delete them either. Please share your suggestions and how you read this.
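For what it's worth, a quick pandas sketch for isolating and flagging the pattern before deciding (column names taken from the table above; the file name is a placeholder):

```python
import pandas as pd

df = pd.read_csv("orders.csv")  # placeholder file name

# Zero quantity with nonzero cost is the suspicious pattern in question.
suspect = (df["ORDER QUANTITY"] == 0) & (df["TOTAL COST"] != 0)
print(suspect.mean())  # ~0.28 per the post

# Negative cost with zero quantity often indicates a return or reversal
# entry rather than garbage; flag it instead of deleting 28% of the data.
df["is_reversal"] = suspect & (df["TOTAL COST"] < 0)
```

If the flagged rows turn out to be returns or cancellations, they are real business events and belong in the data, just under their own label.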
Hello,
I'm currently exploring options for professional data cleaning and analysis services, particularly those utilizing Databricks and PySpark expertise. I have a dataset that requires thorough cleaning to address inconsistencies and erroneous data, followed by in-depth analysis to extract valuable insights for my business.
Here's a breakdown of the tasks I'm looking to outsource:
I understand that the cost of such services can vary depending on factors such as the complexity of the dataset, the volume of data, and the specific requirements of the analysis. However, I would appreciate any ballpark estimates or insights from forum members who have experience with similar projects.
Additionally, if you have recommendations for reputable service providers or consultants specializing in data cleaning and analysis with Databricks and PySpark, please feel free to share them.
Thank you in advance for your assistance!
Hello! I have a collection of OCR text from about a million journal articles and would appreciate any input on how I can best clean it.
First, a bit about the format of the data: each article is stored as an array of strings, where each string is the OCR output for one page of the article. The goal is to have a single large string for each article, but before concatenating the strings in these arrays, some cleaning needs to be done at the start and end of each string. Because we're talking about raw OCR output, many journals have things like journal titles, page numbers, article titles, author names, etc. at the top and/or bottom of each page, and those have to be removed first.
The real problem, however, is that there is just so much variation in how journals do this. For example, some alternate between the journal title and the article title at the top of each page with page numbers at the bottom, some alternate between page numbers at the top and the bottom of each page, and the list goes on. (So far, I've identified 10 different patterns just from examining 20 arrays.) This is further complicated by most articles having different first and sometimes last pages, tables and captions, etc.
At this point, I could keep going: identify patterns, write some regex to detect which pattern is present, then clean accordingly. But I also wonder if there's a more general approach, like searching for some kind of regularity, either across pages or (more commonly) every other page, though I'm not quite sure how to approach that.
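For instance, a rough sketch of that regularity search might look like this: normalize digits out of each page's first and last lines, and drop lines that repeat across most pages of the same parity (`article` below stands for one array of per-page OCR strings):

```python
import re
from collections import Counter

def norm(line):
    # Collapse digits and whitespace so "p. 12" and "p. 13" look identical.
    return re.sub(r"\d+", "#", " ".join(line.split())).lower()

def strip_running_lines(pages, min_frac=0.6):
    # Remove first/last lines that repeat across most same-parity pages.
    page_lines = [p.splitlines() for p in pages]
    counts = Counter()
    for i, lines in enumerate(page_lines):
        if lines:
            counts[(i % 2, "first", norm(lines[0]))] += 1
            counts[(i % 2, "last", norm(lines[-1]))] += 1

    cleaned = []
    for i, lines in enumerate(page_lines):
        n_parity = len(page_lines[i % 2::2])  # pages sharing this parity
        if lines and counts[(i % 2, "first", norm(lines[0]))] >= min_frac * n_parity:
            lines = lines[1:]
        if lines and counts[(i % 2, "last", norm(lines[-1]))] >= min_frac * n_parity:
            lines = lines[:-1]
        cleaned.append("\n".join(lines))
    return cleaned

full_text = "\n".join(strip_running_lines(article))
```

This only strips one line per edge, so multi-line headers would need a second pass.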
Any suggestions would be greatly appreciated!
Hi,
Just wondering what requirements or checklist items people would suggest for a definition of Clean Data ready to be used in machine learning? Akin to "tidy data", but for modelling. I.e.
etc
I know this will likely be opinionated, hence wanting to "crowd source" it 😃
Feel free to disagree with any statements, as I imagine there will be differences
Hey everyone,
I'm a sophomore studying data science and I've been digging into ways to earn money online. I stumbled upon the idea of freelancing my data cleaning skills, and it seems like an exciting avenue. Though I'm still learning, I'm a quick learner and confident that I can get proficient in data cleaning soon.
I'm keen to get hands-on experience and was wondering if anyone would be open to taking me under their wing as an apprentice or offering advice on where to begin.
While I'm still early in my studies, I've worked on a few exploratory data analyses for my classes. These involved cleaning data and using RStudio to create graphs.
I'm eager to turn this interest into a reality. Any guidance or tips on how to kickstart a career in freelancing data cleaning would be hugely appreciated!
Thanks in advance for any help or advice you can offer!
Hello everyone,
I am trying to clean up some data about our items from our ERP system. I work for a furniture company, and we have different characteristics that compose a product (size/timber/fabric and so on). So far, those characteristics have all been entered in one description field. I'd like to extract that information and assign it to the correct new fields (one field per characteristic). Maybe some AI tools might be able to help with that process? I am not a developer / technical IT person.
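For anyone technical weighing in, here is the kind of rule-based sketch a developer might start from. The vocabularies and the size pattern are invented placeholders, not from the actual ERP data:

```python
import re

# Invented vocabularies: in practice, build them from the ERP's master data
# or by scanning the distinct words in the description field.
VOCAB = {
    "timber": {"oak", "walnut", "pine", "beech"},
    "fabric": {"linen", "velvet", "leather", "cotton"},
}
SIZE_RE = re.compile(r"\d+\s*x\s*\d+(?:\s*x\s*\d+)?\s*cm", re.IGNORECASE)

def split_description(description):
    # Map one free-text description to one value per characteristic.
    words = set(re.findall(r"[a-z]+", description.lower()))
    out = {field: next(iter(words & vocab), None) for field, vocab in VOCAB.items()}
    match = SIZE_RE.search(description)
    out["size"] = match.group(0) if match else None
    return out

print(split_description("Dining table, oak, 180 x 90 cm, linen seat pads"))
# -> {'timber': 'oak', 'fabric': 'linen', 'size': '180 x 90 cm'}
```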
Disclaimer: This is a personal project I did, made possible with RPA (UiPath web scraping). The stats come from the SA Rugby website, and I developed automation flows to get the stats, player bios & profile pictures from the same website. I used Power Query to transform the output & debug issues, and finally Tableau for visualisation. I highly recommend getting comfortable with Power Query; you can do so much with it!
Hi everyone, I'd like to share a personal project I did about the Springboks' RWC campaign. I'd love to get your feedback as Power BI people, to get your unique perspective. We only use Tableau at work, so I thought I'd overcome confirmation bias by getting your opinions.
The project is basically match stats for all the games the Springboks played in all championships in 2023, so you can see who is consistently performing well. The stats come from SA Rugby.
Each match has highlight reels of the players' game contributions (71 total). The project also covers all the matches that the Boks under Rassie have played against NZ (5 wins, 5 losses & 1 draw).
Ultimately, the project shows how tough this World Cup was & the pressure the team faced, especially in the knockout phases.
PS. I think this would be great for those new to rugby, since it covers the biggest matches in the sport with highlight reels to see the entertaining stuff.
You can check out the full work here: https://public.tableau.com/views/Springboks2023RugbyWorldCupCampaign/TheSpringboks2023Campaign?:language=en-US&:display_count=n&:origin=viz_share_link
Hello everyone,
I am currently working on a call center trend dashboard project, and I've encountered an issue with multiple blank cells in the data. I'm unsure about the best approach to handle this. Should I delete rows with multiple blank cells, or should I use statistics to fill these blank cells?
I would greatly appreciate your guidance and suggestions on this matter. Your assistance would be invaluable. Thank you in advance!
Project Task :
Create a dashboard in Power BI for Claire that reflects all relevant Key Performance Indicators (KPIs) and metrics in the dataset
Possible KPIs include (to get you started, but not limited to):
Some info about the data:
Total rows: 5000
Total columns: 10
Total rows having missing values: 946; each of these 946 rows has 3 blank/missing cells.
Please guide me on the approach I should take to clean this data.
Note: The blank column is just a temporary column used to check how many cells are blank in each row.
TL;DR: Seeking advice on handling data with many missing values (946 rows, 3 blank cells each) for a call center trend dashboard project. Also tasked with creating a Power BI dashboard for Claire, highlighting KPIs and metrics. Please assist. Thanks!
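For reference, a small pandas sketch of both options; the file name is a placeholder, and which option is right depends on whether the 946 rows differ systematically from the rest:

```python
import pandas as pd

df = pd.read_csv("call_center.csv")  # placeholder file name

# Option 1: drop rows with 3+ blanks. thresh keeps rows that have at
# least (10 - 2) = 8 non-blank cells, i.e. at most 2 blanks.
dropped = df.dropna(thresh=df.shape[1] - 2)

# Option 2: impute, so none of the 946 rows are lost.
filled = df.copy()
for col in filled.columns:
    if pd.api.types.is_numeric_dtype(filled[col]):
        filled[col] = filled[col].fillna(filled[col].median())   # robust to outliers
    else:
        filled[col] = filled[col].fillna(filled[col].mode().iloc[0])  # most common label
```

Either way, first check whether the missing rows cluster in one agent, queue, or time period; if they do, imputing with global statistics would distort the very trends the dashboard is meant to show.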
I am upskilling in the field of data science and recently started practicing on Kaggle datasets. I picked up a dataset which has more categorical columns than numerical, and these columns have more than 5% null values (up to 60% in some columns). I am confused about which technique to use on them, and I cannot find resources that focus specifically on handling object columns. Any help, please? Can anyone suggest a book or website, or just tell me how to proceed with this?
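One rough decision rule, sketched in pandas; the thresholds are judgment calls, not fixed rules:

```python
import pandas as pd

df = pd.read_csv("train.csv")  # placeholder for the Kaggle file

cat_cols = df.select_dtypes(include="object").columns
null_frac = df[cat_cols].isna().mean()

for col in cat_cols:
    if null_frac[col] > 0.50:
        # Mostly empty: often little signal left; keep only a presence flag.
        df[col + "_present"] = df[col].notna()
        df = df.drop(columns=col)
    elif null_frac[col] > 0.05:
        # Sizeable gaps: make "Missing" its own category instead of guessing;
        # the fact that a value is absent can itself be predictive.
        df[col] = df[col].fillna("Missing")
    else:
        # Rare gaps: filling with the most frequent category is usually safe.
        df[col] = df[col].fillna(df[col].mode().iloc[0])
```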
If you're setting out to study Python data analysis, start by mastering the basics of Python programming.
Once you're proficient with Python, dig into the essential libraries: NumPy for numerical computation and Pandas for data manipulation. Practice on real datasets to build hands-on experience, and sharpen your data visualization skills with Matplotlib and Seaborn.
Explore statistical analysis with the tools in SciPy, and consider rounding out your skill set with other relevant libraries such as scikit-learn for machine learning. Get involved in online communities, take on ambitious projects, and keep learning and practicing. Python data analysis is a rewarding pursuit that opens the door to uncovering valuable insights from data.
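For a first taste, here is the kind of minimal workflow those libraries enable, using toy data:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy data standing in for a real dataset.
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "sales": [120.0, 135.0, None, 160.0],
})
df["sales"] = df["sales"].fillna(df["sales"].median())  # a first cleaning step
df.plot(x="month", y="sales", kind="bar", legend=False)
plt.ylabel("sales")
plt.show()
```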
To get you started, I highly recommend these articles.
Exploratory Data Analysis and visualization practical example:
https://link.medium.com/FYuBpTyvCAb
Data cleaning with python (a practical example)
https://link.medium.com/GBsdtEFvCAb
How to make data Visualization in python
https://link.medium.com/6rWH2nKvCAb
Python data cleaning made easy
https://link.medium.com/6rWH2nKvCAb
Sales Statistical analysis with python
https://link.medium.com/ZGx7NDRvCAb
https://link.medium.com/OidaOBUvCAb
Python Web App Development: Unleashing the Power of Simplicity and Flexibility
Enhancing Your Web Application with Python’s Data Analysis Tools
The Ultimate Python 3 Guide: Everything You Need to Know
https://medium.com/@mondoa/enhancing-a-comprehensive-python-3-tutorial-b8102f0cfcc4
I need help figuring out the best tool to extract this data. I work on a Wiki, and I am able to download XMLs of large sets of pages. For this to be of any use to us, I need to be able to put them in Excel and turn them into CSV files so I can re-upload them after I've fixed or added more data. Here's an example of what I can do manually right now to turn it into the format I need for the CSV file:
First I download the XML File. This example only has 3 pages in it, but usually there are hundreds. It looks something like this:
<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.11/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mediawiki.org/xml/export-0.11/ http://www.mediawiki.org/xml/export-0.11.xsd" version="0.11" xml:lang="en">
<siteinfo>
<sitename>FamilySearch Wiki</sitename>
<dbname>wiki_en</dbname>
Then I can manually go through and copy everything between xml:space="preserve"> and </text> to get three separate pages:
{{breadcrumb | link1=[[Mexico Genealogy|Mexico]]
| link2=[[Sinaloa, Mexico Genealogy|Sinaloa]]
| link3=
| link4=
| link5=[[Cosalá, Sinaloa, Mexico Genealogy|Cosalá]]
}}
Guide to '''Municipality of Cosalá family history and genealogy''': birth records, marriage records, death records, census records, parish registers, and military records.
==History==
*El territorio donde actualmente se ubica Cosalá, estuvo ocupado por pueblos prehispánicos que se asentaron principalmente en la rivera de los ríos, como lo fueron
los grupos indígenas Tepehuanes, Acaxees y Xiximes.
*El municipio de Cosalá fue fundado el 13 March 1562.
*El municipio de Cosalá tiene una población de aproximadamente 17.000 personas.<ref>Wikipedia contributors, “Municipio de Cosalá” in ''Wikipedia: the Free Encyclopedia'', https://es.wikipedia.org/wiki/Municipio\_de\_Cosal%C3%A1. accessed 25 February2021.</ref>
==Localities within Cosalá==
{| style="width:100%; vertical-align:top;"
|- |
<ul class="column-spacing-fullscreen" style="padding-right:5px;">
<li>Cosalá</li>
<li>El Rodeo</li>
<li>La Llama</li>
</ul>
|}
==Civil Registration==
*'''1867-1929''' {{FHL|2819510|title-id|disp=Mexico, Sinaloa, Cosalá, Civil Registration, 1867-1929}}(*) at FamilySearch Catalog — images
==Parish Records==
*'''1777-1966''' {{FHL|263768|title-id|disp=Iglesia Católica. Santa Ursula (Cosala, Sinaloa) Parish Records, 1777-1966}}(*) at FamilySearch Catalog — images
*'''1874-1920''' {{FHL|260349|title-id|disp=Iglesia Católica. Santa Ursula (Cosalá, Sinaloa) Parish Records, 1874-1920}}(*) at FamilySearch Catalog — images
==Census==
==Cemeteries==
*Cementerio de San Juan Cosala
:*[https://www.findagrave.com/cemetery-browse/Mexico/Sinaloa/Cosal%C3%A1-Municipality?id=county\_13453 Find a Grave]
==References==
<references/>
<br><br>
[[es:Cosalá, Sinaloa, Mexico Genealogy]]
[[Category:Sinaloa, Mexico]]
{{breadcrumb | link1=[[Mexico Genealogy|Mexico]]
| link2=[[Sinaloa, Mexico Genealogy|Sinaloa]]
| link3=
| link4=
| link5=[[Mocorito, Sinaloa, Mexico Genealogy|Mocorito]]
}}
Guide to '''Municipality of Mocorito family history and genealogy''': birth records, marriage records, death records, census records, parish registers, and military records.
==History==
*En el año de 1531 con la entrada del conquistador Nuño de Guzmán al noroeste mexicano y la fundación de la villa de San Miguel de Navito, se inició la delimitación geográfica de la provincia de Culiacán.
*En 1732 cuando la expansión española llegaba más allá del río Yaqui, se encuentra el territorio dividido en provincias.
*En 1830 se decreta la separación definitiva de Sonora y Sinaloa. El nuevo estado de Sinaloa se dividió en once distritos, siendo Mocorito uno de ellos.
*Mocorito fue erigido como municipio el 8 April 1915.
*El municipio de Mocorito tiene una población de aproximadamente 45.000 personas.<ref>Wikipedia contributors, “Municipio de Mocorito” in ''Wikipedia: the Free Encyclopedia'', https://es.wikipedia.org/wiki/Municipio\_de\_Mocorito. accessed 26 February2021.</ref>
==Localities within Mocorito==
{| style="width:100%; vertical-align:top;"
|- |
<ul class="column-spacing-fullscreen" style="padding-right:5px;">
<li>Pericos</li>
<li>Mocorito</li>
<li>Caimanero</li>
<li>Melchor Ocampo</li>
<li>Recoveco</li>
<li>Higuera de los Vega</li>
<li>Potrero de los Sánchez (Estación Techa)</li>
<li>Cerro Agudo</li>
<li>El Valle de Leyva Solano (El Valle)</li>
<li>Rancho Viejo</li>
</ul>
|}
==Civil Registration==
*'''1865-1929''' {{FHL|2819522|title-id|disp=Mexico, Sinaloa, Mocorito, Civil Registration, 1865-1929}}(*) at FamilySearch Catalog — images
*'''1922''' {{FHL|2819540|title-id|disp=Mexico, Sinaloa, Mocorito y Guasave, Civil Registration, 1922}}(*) at FamilySearch Catalog — images
==Parish Records==
*'''1677-1968''' {{FHL|262334|title-id|disp=Iglesia Católica. Purísima Concepción (Mocorito, Sinaloa) Parish Records, 1677-1968}}(*) at FamilySearch Catalog — images
*'''1856-1933''' {{FHL|589667|title-id|disp=Iglesia Católica. Nuestra Señora de las Angustias (Pericos, Sinaloa) Registros
parroquiales, 1856-1933}}(*) at FamilySearch Catalog — images
==Census==
*'''1930''' {{FHL|454789|title-id|disp=Censo de población del municipio de Mocorito, Sinaloa, 1930}}(*) at FamilySearch Catalog — images
==Cemeteries==
*Panteon Reforma
:*Address: Mocorito
*Cementerio de Buena Vista
:*Address: Mocorito
*Cementerio de El Queso
:*Address: Boca de Arroyo, Mocorito
==References==
<references/>
<br><br>
[[es:Mocorito, Sinaloa, Mexico Genealogy]]
[[Category:Sinaloa, Mexico]]
{{breadcrumb | link1=[[Mexico Genealogy|Mexico]]
| link2=[[Sinaloa, Mexico Genealogy|Sinaloa]]
| link3=
| link4=
| link5=[[Sinaloa, Sinaloa, Mexico Genealogy|Sinaloa]]
}}
Guide to '''Municipality of Sinaloa family history and genealogy''': birth records, marriage records, death records, census records, parish registers, and military records.
==History==
*Sinaloa de Leyva se fundó el 30 April 1583 con el nombre de Villa de San Phelipe y Santiago de Sinaloa.
*En 1732 La Villa es designada capital de la gobernación de Sinaloa.
*Sinaloa fue erigido como municipio el 25 March 1915.
*El municipio de Sinaloa tiene una población de aproximadamente 89.000 personas.<ref>Wikipedia contributors, “Municipio de Sinaloa” in ''Wikipedia: the Free Encyclopedia'', https://es.wikipedia.org/wiki/Municipio\_de\_Sinaloa. accessed 26 February2021.</ref>
==Localities within Sinaloa==
{| style="width:100%; vertical-align:top;"
|- |
<ul class="column-spacing-fullscreen" style="padding-right:5px;">
<li>Estación Naranjo</li>
<li>Sinaloa de Leyva</li>
<li>Genaro Estrada</li>
<li>Gabriel Leyva Velázquez</li>
<li>Ruiz Cortines Número Tres</li>
<li>Alfonso G. Calderón Velarde</li>
<li>Cubiri de Portelas</li>
<li>Ejido el Maquipo</li>
<li>Llano Grande 1,540</li>
<li>Santiago de Ocoroni</li>
</ul>
|}
==Civil Registration==
*'''1861-1929''' {{FHL|2819523|title-id|disp=Mexico, Sinaloa, Sinaloa, Civil Registration, 1861-1929}}(*) at FamilySearch Catalog — images
==Parish Records==
*'''1852-1968''' {{FHL|263710|title-id|disp=Iglesia Católica. San Felipe y Santiago (Sinaloa, Sinaloa) Parish Records, 1852-1968}}(*) at FamilySearch Catalog — images
==Census==
*'''1930''' {{FHL|454801|title-id|disp=Censo de población del municipio de Sinaloa, Sinaloa, 1930}}(*) at FamilySearch Catalog — images
==Cemeteries==
*Panteón Municipal de Estación Naranjo Sinaloa Jesus Parra Gerardo
:*Address: Francisco Villa #0, Estación Naranjo, Sinaloa
*Cementerio Municipal
:*Address: Sinaloa Guasave, Sinaloa de Leyva, Sinaloa
*Panteón Municipal
:*Address: Isauro Vallejo #0, Tierra Blanca, Sinaloa
:*[https://www.findagrave.com/cemetery-browse/Mexico/Sinaloa/Sinaloa-Municipality?id=county\_13465 Find a Grave]
==References==
<references/>
<br><br>
[[es:Sinaloa, Sinaloa, Mexico Genealogy]]
[[Category:Sinaloa, Mexico]]
Does anyone know an efficient way to get a computer to do this? I tried having ChatGPT help me write functions for Google Sheets, but they weren't working very well for me. I also tried using regular expressions, but could still only process one page at a time and still had to do a lot of the work manually, which isn't feasible when there are 300+ pages to go through. I'm happy to learn something new to do this, as it would help speed up some of our processes. I am sure something like this exists, but I don't know what. Thanks for your help!
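One possible route, if you can run Python: the standard library can parse the export directly, with no manual copying. This is a sketch assuming the export-0.11 structure shown above; file names are placeholders:

```python
import csv
import xml.etree.ElementTree as ET

# MediaWiki exports are namespaced; this matches the export-0.11 schema
# shown above (adjust the URL if your export version differs).
NS = {"mw": "http://www.mediawiki.org/xml/export-0.11/"}

tree = ET.parse("export.xml")  # placeholder file name
rows = []
for page in tree.getroot().findall("mw:page", NS):
    title = page.findtext("mw:title", default="", namespaces=NS)
    # The wikitext lives at <page><revision><text xml:space="preserve">.
    text = page.findtext("mw:revision/mw:text", default="", namespaces=NS)
    rows.append([title, text])

with open("pages.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "wikitext"])
    writer.writerows(rows)
```

Excel opens the resulting CSV directly, one page per row, however many hundreds of pages the export contains.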
As in the picture, there are multiple records with the same headers; I want to create data that has the column headers once, with their values below them. I am unable to find a way out. Please help!
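Since the picture didn't survive here, this is only a guess at the structure, but if the sheet alternates header rows and value rows, a pandas sketch like this might be a starting point (the file name is a placeholder):

```python
import pandas as pd

# Guessed layout: header rows repeat before every record, e.g.
#   Name | Age | City     <- header row (repeated for each record)
#   Ana  | 31  | Lisbon   <- value row
#   Name | Age | City
#   Ben  | 45  | Porto
raw = pd.read_excel("records.xlsx", header=None)  # placeholder file name

header = raw.iloc[0]        # the first header row becomes the columns
values = raw.iloc[1::2]     # every second row holds the values
tidy = values.set_axis(header, axis=1).reset_index(drop=True)
print(tidy)
```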