/r/TechSEO
Welcome to /r/TechSEO, a subreddit dedicated to the tech-nerd side of SEO.
I feel like there have been A TON of AI x SEO related posts here over the last year, with so much interesting content.
As we're nearing the end of the year, I thought it'd be pretty awesome to conduct a quick survey on how all of our use of AI in SEO has evolved over the last year + how it's trending heading into 2025.
Link to the survey below. I'd love all of your takes. I'll send the raw data to everyone who participates, if you're interested.
Our startup also gave us a $100 budget to reward respondents, so we'll pick one respondent at random to receive a $100 Amazon gift card.
In an e-commerce context with many PDPs, how do you handle pagination? Self-canonical on all pages, or canonical to the first one, and why? (Sketch of both options below.)
Thanks in advance
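To make the two options concrete, a minimal sketch (the URL and page parameter are placeholders):

    <!-- Option A: self-referencing canonical, emitted on page 2 itself -->
    <link rel="canonical" href="https://example.com/category?page=2">

    <!-- Option B: every paginated page canonicalized to the first page -->
    <link rel="canonical" href="https://example.com/category">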
Hey everyone,
We just received a message from Google Search Console about "Deceptive pages" on our site tailoredcv.ai. The description states:
"These pages attempt to trick users into doing something dangerous, such as installing unwanted software or revealing personal information."
However, Google didn’t provide any sample URLs, which makes it really hard to investigate. Yesterday, we rolled out some new pages, such as this one, and after Google's recrawl, this flag popped up.
Our site doesn’t host malware or deceptive content, and we’re unsure what could have triggered this.
Does anyone have experience with this issue? How can we pinpoint what’s causing the flag if no sample URLs are provided? What steps should we take to resolve this with Google?
Any guidance or advice would be greatly appreciated!
Thanks in advance! 🙏
I started a new crawl for a website and noticed that 549 links are returning a "no response." However, when I navigate to the URL in my browser, the webpage appears fine.
I have two questions:
1. What could cause a crawler to get no response when the pages load fine in a browser?
2. How would you handle this situation if you were in my position?
Thanks a lot!
Hey everyone, this is my first post here as a developer and not an SEO expert. This one is stumping me.
I’m proud of a site I’ve been working on, but I’m running into some weirdness with Google’s Core Web Vitals Assessment and was wondering if anyone else has seen something like this.
The site fails the assessment because Cumulative Layout Shift (CLS) is rated at 0.53, which doesn’t seem right. But when I scroll down the report, it shows a CLS score of 0.001, which is much closer to what I’d expect based on the site’s performance.
For reference, here’s the site: https://craftroulette.live
And here’s the full PageSpeed report: https://pagespeed.web.dev/analysis/https-craftroulette-live/peppcl2xf6?form_factor=mobile
Has anyone run into conflicting numbers like this before? Could it be an issue with how Google measures field vs. lab data?
I’m not losing sleep over it, but it’s a little baffling—and honestly, it feels like a matter of honor to get this figured out!
When using FAQ schema for questions that have longer answers (answers formatted as two paragraphs, for example), does that need to be reflected in the schema too? In other words, when writing the answer into the code, do you have to preserve the paragraph breaks, or can you just put the answer in as one block?
Does that make sense? Thanks.
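To illustrate the question, here's a minimal FAQPage sketch (placeholder content; my understanding is that Google allows limited HTML such as <p> tags inside the Answer text, but treat that as an assumption to verify):

    <!-- Sketch only: placeholder Q&A; <p> tags in "text" assume Google's limited-HTML allowance -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [{
        "@type": "Question",
        "name": "Placeholder question?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "<p>First paragraph of the answer.</p><p>Second paragraph of the answer.</p>"
        }
      }]
    }
    </script>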
Some SEO tools allow custom data extraction with XPath or regex. Are you using this feature? I'm curious what kind of data you extract and how you use it.
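As a concrete (hypothetical) example of what I mean, an XPath that would pull each crawled page's meta description:

    //meta[@name="description"]/@content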
Why does my robots.txt file sometimes show as "Fetched" successfully, while at other times it shows "Not Fetched - Redirect error" in Google Search Console?
Hi everyone,
I'm working on a major ecommerce site. Recently, all of the URLs on the SERP have changed to a variation that includes an ?srsltid parameter. I have noticed the same for a few competitors too.
Why does that happen? Are there any SEO implications?
I want to note that the site is mostly canonicalized (a Magento-based site with no extension, so non-product and category pages are not canonicalized).
How to proceed, in your opinion?
Hey all,
I’m working on a tool idea aimed at automating a common but time-consuming SEO strategy: using expired domains to build authority and drive link equity. Would love to get some feedback from the pros here to see if this is something you'd find useful or would consider using.
Here's the concept:
The tool would streamline the whole process of taking over an expired domain, building it into an SEO asset, and linking naturally to a target site over time. It would handle everything, from content creation to link placement and ongoing updates, so the site maintains authority without the usual maintenance. The idea is to save hours of work and budget while delivering the same link-building benefits.
I'm curious: any feedback, whether on the concept, potential use cases, or general challenges you've faced with expired domains, would be super helpful. Thanks in advance for sharing your thoughts!
Cheers,
Sam
Does it make any sense to canonicalize a page generated by a filter (one that adds no value) and to add noindex and nofollow as well?
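For reference, the combination in question would look something like this (placeholder URLs; a sketch of the question, not a recommendation):

    <!-- Sketch: canonical to the unfiltered page plus a noindex,nofollow robots meta -->
    <link rel="canonical" href="https://example.com/category">
    <meta name="robots" content="noindex, nofollow">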
I have a custom PHP web app at the root of my domain that is doing a great job for SEO, traffic, etc.
I also want a blog - and I decided on WordPress and placed it within a subdirectory - and, well - all good. Many blog posts are indexed and all seems well.
My question is to just make sure that I am "ok" doing what I am doing, in other words, would having a WP installation confuse a crawler? For example, if a crawler goes into the blog and then sees a different menu (with a different HTML structure) then is all well or is this not recommended?
I am inclined to think not: GoogleBot is smart enough to crawl URLs ONLY and parse the TEXT (i.e. "content") that it can then render.
Am I overworrying or am I restricting the growth opportunities of my site by having WP as a blog within the subdirectory?
Thanks!
I was looking into the GSC crawling data recently and realized we have dozens of subdomains that are getting crawled very frequently. Most of them have no robots.txt files and thousands of useless pages that should not be accessible. Some have crawling frequency in the millions per day, others have very high download sizes per crawl, significantly higher than that of the main domain.
I'm going to add a robots.txt for the biggest offenders, but I'm also wondering if this is going to have an actual impact on the main domain, as Google claims it considers them separate entities. Also, the main domain has only a few thousand URLs, so crawl budget should not be a worry.
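For the worst offenders, the plan is a blanket disallow at each subdomain root, along these lines (a minimal sketch; the host is a placeholder):

    # robots.txt served at https://subdomain.example.com/robots.txt
    User-agent: *
    Disallow: /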
From what I can see, this type of Google Knowledge Graph rich snippet appears to be an aggregated list based on other lists online (there aren't links from each item in the snippet to the websites of all the companies named in the list).
Thanks!
We have some cleanup to do on some pages with a good number of external backlinks. This group of pages 404s, doesn't get much traffic, and has good backlinks/SEO juice, but the pages are noindexed.
I assume that to realize that link juice we need to remove the noindex and then 301 them to relevant pages, right?
Any other details to consider?
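If it helps frame answers, the kind of redirect I have in mind, as an Apache .htaccess sketch (both URLs are placeholders):

    # .htaccess sketch (Apache mod_alias); path and target URL are placeholders
    Redirect 301 /old-page-with-backlinks https://example.com/relevant-target-page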
A semrush audit raised that warning for many of my pages.
Basically, I have many internal links that just go to an API which logs the click and then redirects the user to an external site. You can think of them as affiliate links. I don't have a nofollow on them, since they ultimately go elsewhere, and I don't want crawlers to hit them and thus mess up my stats.
Is it safe to ignore this semrush warning, or is there a better way (ie SEO correct way) to mark these links but so that crawlers don't follow them?
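For discussion, the pattern I've seen suggested for links like these, sketched with placeholder paths (rel="sponsored" assumes they count as affiliate-style links):

    <!-- Placeholder path; rel="nofollow sponsored" assumes affiliate-style links -->
    <a href="/go/partner-offer" rel="nofollow sponsored">Partner offer</a>

and, to keep crawlers off the redirect endpoint entirely, a robots.txt disallow on that path:

    # robots.txt: placeholder path for the logging/redirect endpoint
    User-agent: *
    Disallow: /go/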
Thanks
Today, I got an email from SemRush saying it found a gazillion nofollow internal links, but not on all pages.
And I have no idea how it happened.
example URL
https://ppcpanos.com/about-google-ads-oci/
The site is SSG/SSR (SvelteKit) and is hosted on Cloudflare Pages.
Any idea as to how to debug it would be highly appreciated.
I used a random "check broken links" website to scan my website, and it gave me almost 200 links with errors like the ones in the title. Is there an easier/better way to further analyse and remove these?
Hi all,
I'm wondering what kind of work tech SEO freelancers perform monthly for a long-term client, so not one-off projects (migrations, audits...), and where you exclusively do tech SEO, meaning you're not a regular SEO manager who also builds backlinks, does content writing/planning/briefs, etc.
With my clients, I'm a "fractional technical SEO", meaning I help in-house teams that don't need full-time tech SEO or the client's SEO team doesn't have the time to work on it.
As part of their team, I have employee-like access (Jira, Slack, GitHub, AWS, guest on Teams, etc.) where, amongst other tasks, I:
I'm also interested in the pricing: do you get paid by the hour, or do you work on a retainer?
I'm interested in your approach. What I described above is what came naturally and where I feel I can bring the most value, as opposed to being a consultant who just does calls and doesn't do any actual work or take any responsibility...
Hey Everyone,
I’m looking for recommendations on any reliable AI tools or software that can analyse the headers, body text, and keywords of my content, give it a comprehensive SEO score, and suggest or even automatically implement improvements to boost optimization. I know of tools like Frase and Surfer SEO but haven’t checked them out yet.
My main focus is finding an AI tool that can rewrite existing content (headers / body texts) to improve SEO optimization and achieve a higher score. Any go-to sites, apps, or software you’d recommend for this?
Thanks in advance
Does a SaaS exist with multi-grid GSC widgets in a dashboard?
I already built something like this for myself, but it's not exactly simple, and I'm looking for more polished solutions that already do this.
Fairly good-sized e-commerce website, and I've taken it to a really good place for page speed on desktop!
I just somehow need to improve mobile, but I'm lost on where to start.
Hi guys
I noticed in the last few days (I had never checked before) that Google Search Console is not showing all the data it reports on, and I would like to know if there's something I'm missing.
When I look at the top, I can see that the total clicks in GSC are 6.59K, but when I sum up all the individual clicks, the number doesn't match (it's not even close to 6.59K).
Can anyone explain what I'm missing?
Thank you
Hello, how are you?
I'm just starting to work in SEO; I'm a young 22M.
It's my first job, and it's a great opportunity at a well-known company.
That said, I want to specialize in SEO to have a certain "stability".
Do you recommend a study path in the area of technical SEO? Any additional knowledge? I currently have basic knowledge of HTML, CSS, and Python, but it's not very useful to me, at least for the moment.
Do you have any recommendations for a young person just starting out?
Hello everyone! My customer wants to track clicks on a slider on the homepage. He added a cmpcode parameter to the URL, and I was wondering: are we wasting link juice?
Thank you!
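For context, my (unverified) understanding is that a canonical on the landing page pointing at the clean URL would consolidate the parametered variant, e.g. (placeholder URL and parameter):

    <!-- On https://example.com/page?cmpcode=slider1 (placeholder): canonical to the clean URL -->
    <link rel="canonical" href="https://example.com/page">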
I have watched all his content and I enjoy it and have learned a lot. Who else follows him?