Beep. Boop. Beep. Plants?
- Cyber Catamounts
- Sep 7, 2020
- 2 min read
Briefing:
The team has discovered a suspicious website that was linked on the Panther Creek website*, supposedly selling house plants? (To be honest, the Poplantis is pretty cool, if I do say so myself.) This might be the next step in uncovering the C0Ff33L0verS hacker team's plans, as we believe there's something preventing our reconnaissance robots from crawling on their website. Give it a go!
*For legal reasons, this website is not on the PC website, so don’t search for it.
This was actually a challenge created by CyberFastTrack, so thank you to them for the learning resource!
Tip: What tells "search engine crawlers which pages or files the crawler can or can't request from your site"?
TL;DR Remove /index.html from the URL and replace it with /robots.txt. Find the secret page (/secret-product.html) and append that to the base URL.
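The URL swap in the TL;DR can be sketched in a few lines of Python. (The domain below is a stand-in; for legal reasons the real challenge URL isn't linked here.)

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Replace whatever path a page URL has with /robots.txt."""
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

# example.com stands in for the actual challenge site
print(robots_url("https://example.com/index.html"))
# -> https://example.com/robots.txt
```

Since robots.txt always lives at the root of the site, this works no matter which page of the shop you start from.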

Nice website, right?
If you click around, you’ll find that you can’t really do anything on it. In other words, there aren’t any vulnerabilities on the actual website. Maybe the URL?
The big giveaway in this challenge is the phrase "preventing our reconnaissance robots from crawling on their website."
Googling this phrase turns up results about a file named robots.txt.
Google itself gives you the answer:
Robots.txt
/robots.txt, part of the robots exclusion standard (also called the robots exclusion protocol), is a file placed at the root of a website to tell web crawlers/robots where they can't crawl. Basically, when you Google something, Googlebot crawls webpages to find what you need. By adding a directory to robots.txt, you tell Googlebot NOT to crawl those pages.
Actually solving the challenge:
User-Agent: *
Allow: /index.html
Allow: /product-*.html
Allow: /cart.html
Disallow: /secret-product.html
So, it looks like crawlers are allowed on the index, product, and cart pages, but explicitly blocked from the secret product one!

I guess the secret product’s not so secret anymore…