Our Portfolio
Discover our innovative projects spanning AI, ML, LLMs, automation, and more
Google Scholar Research Collaboration Platform
AI & ML
A Django web application that helps researchers discover relevant scholarly work by indexing the Google Scholar feed and surfacing personalized results. NLP techniques extract key information from abstracts and keywords so that searches return contextual, not just lexical, matches. Outcome: Gave researchers a faster way to find relevant papers and collaborators, with personalized ranking based on their interests.
Key Features
- Web app for indexing and searching scholarly articles
- Google Scholar data integration
- NLP-based keyword and abstract extraction
- Personalized result ranking per user
- Tech Stack: Python, Django, NLTK
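The NLTK-based extraction step can be illustrated with a much-simplified sketch: a frequency-based keyword extractor over an abstract, using only the standard library. The stop-word list and the sample abstract are invented for the example; the production pipeline used NLTK's richer tokenization and corpora.

```python
import re
from collections import Counter

# A tiny stop-word list stands in for NLTK's corpus-based list.
STOPWORDS = {"a", "an", "the", "of", "and", "in", "on", "for", "to", "is",
             "are", "that", "this", "with", "by", "we", "our", "as", "be"}

def extract_keywords(abstract: str, top_n: int = 5) -> list[str]:
    """Return the most frequent non-stop-word terms in an abstract."""
    tokens = re.findall(r"[a-z]+", abstract.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [term for term, _ in counts.most_common(top_n)]

abstract = ("We propose a graph neural network for citation recommendation. "
            "The graph neural network embeds citation contexts and ranks "
            "candidate papers by citation relevance.")
print(extract_keywords(abstract, top_n=3))
```

Frequency counts are only a stand-in here; contextual matching as described above additionally relies on semantic similarity, not raw term counts.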
Urdu Image Caption Generation
AI & ML
An attention-based LSTM model that generates Urdu-language captions for input images. The project required building a custom dataset of images paired with native Urdu captions, then training and deploying the model as an online service. Outcome: First-of-its-kind plug-and-play Urdu captioning model, available online for direct use.
Key Features
- Attention-based LSTM architecture
- Custom Urdu image-caption dataset
- Trained and deployed for online inference
- Tech Stack: Python, TensorFlow
LoRaDRL — Deep RL for LoRaWAN Parameter Selection
AI & ML
A deep reinforcement learning algorithm (DDQN) that selects PHY-layer transmission parameters in LoRaWAN networks. Existing rule-based algorithms cause packet collisions in dense LPWAN deployments; this DRL approach reduces collisions and improves Packet Delivery Ratio by up to 500% in some scenarios. Published at IEEE LCN 2020 with a follow-up paper. Outcome: Demonstrated up to 500% PDR improvement over state-of-the-art LoRaWAN parameter selection, with peer-reviewed publications in IEEE venues.
Key Features
- Double Deep Q-Network (DDQN) algorithm
- Custom Python simulation environment for LoRaWAN
- Validated against state-of-the-art baselines
- Two peer-reviewed IEEE publications
- Tech Stack: Python, TensorFlow
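The core of Double DQN, as opposed to vanilla DQN, is that the online network selects the next action while the target network evaluates it, which curbs Q-value overestimation. A minimal sketch of that target computation, with toy Q-values rather than output from the actual LoRaWAN simulator:

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double-DQN target: the online net picks the action, the target net
    evaluates it (decoupling action selection from value estimation)."""
    if done:
        return reward
    best_action = max(range(len(next_q_online)), key=next_q_online.__getitem__)
    return reward + gamma * next_q_target[best_action]

# Toy example: three candidate transmission-parameter "actions".
q_online = [1.0, 2.5, 2.4]   # online net prefers action 1
q_target = [1.1, 2.0, 2.6]   # target net's (more stable) estimates
print(ddqn_target(reward=1.0, next_q_online=q_online,
                  next_q_target=q_target, gamma=0.9))
```

In vanilla DQN both the argmax and the evaluation would use `q_target`, which is exactly the coupling Double DQN removes.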
Adversarial ML in 5G Networks (Research)
AI & ML
A research study on the adversarial risks of using AI/ML for 5G network automation. The work covers supervised, unsupervised, and reinforcement-learning attack surfaces through three case studies, proposes mitigation approaches, and offers guidelines for evaluating ML model robustness in 5G contexts. Published in IEEE Internet Computing 2021. Outcome: Peer-reviewed publication that provides the 5G research community with a structured view of adversarial ML risks and mitigation strategies.
Key Features
- Three case studies covering supervised, unsupervised, and RL attacks
- Custom Python environments for each adversarial scenario
- Mitigation guidelines and robustness evaluation framework
- Peer-reviewed publication in IEEE Internet Computing
- Tech Stack: Python, TensorFlow
Deep RL Agents for Flappy Bird & Lunar Lander
AI & ML
DQN and DDQN agents trained to play Flappy Bird and Lunar Lander. The agents achieve high scores and sustain long play sessions, demonstrating practical value-based deep reinforcement learning. Outcome: Working DQN/DDQN implementations that consistently outperform baselines on classic RL benchmarks.
Key Features
- DQN and DDQN agents for two distinct environments
- Sustained high-score performance across long episodes
- Reproducible training and evaluation setup
- Tech Stack: Python, TensorFlow
CNN-Based Object Classification
AI & ML
Convolutional Neural Network models that classify objects and digits into predefined categories and sub-categories. The trained models were further fine-tuned for additional object classes, producing strong cross-domain performance. Outcome: Reusable CNN classifiers that can be fine-tuned to new object classes with minimal data and effort.
Key Features
- CNN and DNN architectures for hierarchical classification
- Fine-tuning workflow for new object categories
- Validated accuracy across predefined classes
- Tech Stack: Python, TensorFlow
Semantic News Matching with Knowledge Graphs
AI & ML
A semantic matching pipeline that uses word embeddings to relate news articles across thousands of sources. A custom scraper assembled the news corpus, matches were stored in Neo4j, and an API surfaces articles related to the one a reader is currently viewing, visualized as a comparison graph. Outcome: Powered a "related news" experience that connects readers to relevant articles across the web in real time.
Key Features
- Custom scrapers across many news sources
- Word-embedding-based semantic matching
- Knowledge graph storage in Neo4j
- API for serving related articles in real time
- Tech Stack: Python, TensorFlow, PostgreSQL
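Embedding-based matching of this kind typically boils down to cosine similarity between article vectors. A minimal sketch with tiny invented 4-dimensional embeddings; real article vectors are high-dimensional word-vector averages, and the article names here are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical article embeddings (invented for illustration).
articles = {
    "central-bank-rates": [0.9, 0.1, 0.0, 0.2],
    "inflation-report":   [0.8, 0.2, 0.1, 0.3],
    "football-final":     [0.0, 0.9, 0.8, 0.1],
}

query = articles["central-bank-rates"]
related = max((a for a in articles if a != "central-bank-rates"),
              key=lambda a: cosine(query, articles[a]))
print(related)
```

The Neo4j layer then materializes these pairwise similarities as relationship edges so related articles can be fetched with a graph query instead of recomputing similarities per request.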
K-Means Fleet Order Assignment
AI & ML
A K-Means-based assignment engine that distributes delivery orders from multiple vendors across a fleet of riders, encoding the client's business rules into the clustering and ranking logic. Replaced a team of human dispatchers and dramatically increased throughput. Outcome: Scaled order assignment from ~1,000 to ~4,000 orders per day while reducing the dispatch team's workload.
Key Features
- K-Means clustering with embedded business rules
- Multi-vendor, multi-rider assignment logic
- Production deployment on top of MongoDB
- 4× throughput improvement vs. manual dispatch
- Tech Stack: Python, Scikit-learn, MongoDB
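The clustering step can be sketched with a minimal k-means over order pickup coordinates. This toy version omits the client's business rules and uses invented points; the production system used scikit-learn's implementation.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over (lat, lon)-style 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center for each order location.
        assign = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
        # Update step: move each center to the mean of its cluster.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return assign, centers

# Two clearly separated groups of pickup locations (invented).
orders = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
labels, centers = kmeans(orders, k=2)
print(labels)
```

Each resulting cluster then maps to one rider, with the ranking logic deciding visit order within a cluster.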
Graph Neural Network Fraud Detection
AI & ML
A Graph Convolutional Network (GCN) for fraud detection trained on a public dataset (80/20 split). The model was attacked with adversarial samples to test robustness, then fine-tuned on those samples to harden it against future attacks — improving overall accuracy in the process. Outcome: Produced a fraud detection model that is both more accurate and noticeably more robust to adversarial input.
Key Features
- GCN architecture for graph-structured fraud data
- Adversarial attack evaluation
- Adversarial fine-tuning for hardened robustness
- Tech Stack: Python, PyTorch
COVID-19 Detection from Chest X-rays
AI & ML
CNN models trained to distinguish COVID-19 / Pneumonia chest X-rays from normal X-rays in a binary classification task. Multiple architectures and configurations were evaluated, with all code and findings published openly on GitHub. Outcome: Open-source reference implementation for COVID-19 X-ray detection, with full reproducible code and results.
Key Features
- Binary classifier for "Infected" vs "Normal" chest X-rays
- Multiple CNN configurations evaluated and compared
- Open-source code and results on GitHub
- Tech Stack: Python, PyTorch
Notion Clone with AI & Live Collaboration
Full-Stack Apps
A modern note-taking application inspired by Notion, with built-in AI features and real-time multi-user collaboration. Users can chat with their documents, generate automatic summaries, translate content on the fly, and co-edit alongside teammates in real time — all behind a secure authentication layer. Outcome: A production-ready Notion alternative that combines familiar note-taking ergonomics with modern AI document intelligence.
Key Features
- Chat-with-document Q&A powered by OpenAI
- Automatic document summarization
- Real-time machine-learning-based translation
- Live multi-user collaboration with Liveblocks
- Secure authentication and user management with Clerk
- Cloud-native AI tasks via Cloudflare Workers
- Tech Stack: Next.js, React, Clerk, Liveblocks, Shadcn UI, OpenAI, Firestore, Cloudflare Workers
AI Car Recommendation Agent
LLMs, RAGs & Agents
An AI-powered virtual sales advisor that ingests scraped data from multiple car-selling platforms and recommends the best car for a user's budget, preferences, and intended use. The system understands natural-language queries, filters across price, brand, mileage, and condition, and surfaces ranked alternatives when an exact match isn't available. Supports both English and Arabic queries, and is built to scale to additional marketplaces. Outcome: A scalable recommendation agent that turns free-text user preferences into actionable, ranked car suggestions across multiple marketplaces.
Key Features
- Natural-language query understanding with multi-parameter filtering
- Ranked recommendations with fallback alternatives
- Marketplace-ready ingestion from scraped sources
- Bilingual support (English and Arabic)
- Designed to scale across multiple car platforms
- Tech Stack: Python, OpenAI GPT-4, LangChain, Selenium, BeautifulSoup, FastAPI, Docker, AWS, PostgreSQL
AI-Powered LinkedIn Branding Content
LLMs, RAGs & Agents
A series of LLM-assisted LinkedIn posts written to build the client's personal brand, establish thought leadership, and attract qualified leads. Posts are grounded in verifiable information and tuned to the client's voice across economics, data analytics, leadership, growth, and sustainable development. Outcome: Ready-to-publish, on-brand content tuned for LinkedIn engagement and lead generation.
Key Features
- Engaging hooks and clear takeaways in every post
- Industry statistics and insights integrated naturally
- Personal anecdotes and actionable advice for engagement
- Targeted hashtags for LinkedIn discoverability
- Mobile-readable structure and length
- Tech Stack: Python, OpenAI GPT-4, LangChain
Medical Procedures RAG System
LLMs, RAGs & Agents
A Retrieval-Augmented Generation pipeline that interprets clinical questions and returns evidence-based procedure suggestions backed by citations from trusted medical sources. Outcome: A clinical decision-support tool that returns ranked procedure suggestions with citations and contraindication alerts, built to support, not replace, the judgment of healthcare professionals.
Key Features
- Medical query processor with intent classification
- Vector database of clinical guidelines and research
- Hybrid search RAG pipeline
- Secure API with safety filters and contraindication alerts
- Optional demo interface
- Tech Stack: Python, LangChain, Llama / Mistral, BioBERT, FastAPI, Docker
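One common way to implement the hybrid-search step is reciprocal-rank fusion, which merges a keyword ranking and a vector ranking without having to calibrate their score scales against each other. A sketch with invented document IDs (the pipeline's actual fusion method isn't specified above):

```python
def rrf_fuse(keyword_ranking, vector_ranking, k=60):
    """Reciprocal-rank fusion: each document scores sum(1 / (k + rank))
    over the rankings it appears in; k=60 is the conventional default."""
    scores = {}
    for ranking in (keyword_ranking, vector_ranking):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical guideline snippets returned by the two retrievers.
keyword_hits = ["guideline-12", "guideline-07", "guideline-31"]
vector_hits  = ["guideline-07", "guideline-44", "guideline-12"]
print(rrf_fuse(keyword_hits, vector_hits))
```

Documents that rank well in both retrievers rise to the top, which is why fusion tends to beat either retriever alone on clinical phrasing variants.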
Order Management REST API Suite
REST APIs
A comprehensive Django REST Framework backend for an Order Management System, including order creation, editing, listing with advanced filtering, status management, and rider assignment. The system runs on MongoDB, queues work through Celery and RabbitMQ, and is deployed on Heroku with Slack and Sentry monitoring. Outcome: Reliable, observable order-management backend that handles the full order lifecycle from creation to fleet assignment.
Key Features
- Full REST API for orders, assignments, and status transitions
- Advanced filtering for multi-order listing endpoints
- JWT-based authentication
- Celery + RabbitMQ background queues
- Optimized MongoDB queries with scheduled maintenance scripts
- Slack and Sentry alerting; Dynoscale for auto-scaling
- Deployed on Heroku via GitHub CI/CD
- Tech Stack: Python, Django, MongoDB, Celery, RabbitMQ, Redis
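The JWT-based authentication works by signing a header/payload pair with a shared secret. A from-scratch HS256 sketch of the mechanism, using only the standard library; a production Django service would use a maintained JWT library rather than hand-rolled code, and the payload fields here are invented:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Produce an HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(_b64url(expected), sig)

token = sign_jwt({"sub": "rider-42", "role": "dispatcher"}, secret="demo-secret")
print(verify_jwt(token, "demo-secret"), verify_jwt(token, "wrong-secret"))
```

Because the token is self-verifying, API workers can authenticate requests without a database lookup on every call.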
Fleet Management REST API Suite
REST APIs
As technical lead, we built the REST API backend for a complete fleet management product covering a driver mobile app, driver portal, onboarding panel, and management portal. It includes order assignment, driver onboarding, blocking/unblocking, and live location tracking, all on MongoDB and Django REST Framework. Outcome: End-to-end fleet management backend used across mobile and web surfaces, with smooth scaling under live load.
Key Features
- APIs for orders, drivers, onboarding, and management
- Driver block/unblock and availability endpoints
- Real-time location tracking
- JWT authentication and per-role access controls
- Celery + RabbitMQ for background processing
- Slack and Sentry monitoring; Dynoscale for auto-scaling
- Deployed on Heroku via GitHub CI/CD
- Tech Stack: Python, Django, MongoDB, Celery, RabbitMQ
AI Assistant Mobile App REST APIs
REST APIs
A serverless backend on AWS Lambda for an AI assistant mobile app, with DynamoDB as the primary store. Covers signup/signin, password reset, conversation history, and uploads for image, audio, and video — including a live video socket implementation. S3 handles file storage and SES handles transactional email. Outcome: A scalable, serverless backend that powers a feature-rich AI mobile assistant with media upload and real-time video.
Key Features
- Full auth flow (signup, signin, forgot password) with JWT
- Conversation APIs with latest-conversation lookups
- Image, audio, and video upload endpoints
- Live video socket implementation
- AWS S3 for file storage and SES for transactional email
- Custom Python routing layer on AWS Lambda
- CloudFront + Route 53 for global delivery
- Tech Stack: Python, AWS Lambda, DynamoDB, AWS S3, AWS SES, CloudFront
Marriott Hotel Price Checker
Web Automation
A Python script that checks Marriott hotel pricing for client-defined locations and dates on www.marriott.com, then emails the client an HTML-formatted price comparison table for nearby hotels. Locations are managed through a MongoDB-backed admin frontend purpose-built for the workflow. Outcome: Replaced manual price checks with a scheduled, email-delivered report covering every location the client cares about.
Key Features
- Scheduled scraping at configurable times of day
- HTML-formatted email reports with price comparison tables
- Custom MongoDB-backed admin frontend for location management
- Deployed on an Ubuntu remote desktop server
- Tech Stack: Python, Selenium, MongoDB, Linux
Binance Top-100 Crypto Pairs Bot
Web Automation
A Python script that periodically pulls the top 100 USDT trading pairs and their pricing from Binance, feeding the list into a downstream trading bot that monitors price fluctuations. Outcome: Provided a continuously refreshed list of high-volume USDT pairs that powers the client's trading algorithm.
Key Features
- Scheduled extraction at regular intervals
- Continuously refreshed top-100 USDT pairs list
- Direct hand-off into the client's trading bot
- Deployed on an Ubuntu remote desktop server
- Tech Stack: Python, Selenium, Linux
PDF Document Extraction Bot
Web Automation
A Python automation that searches a target website for companies, navigates to each company's document directory, and downloads every PDF. Files are hashed to detect duplicates, and NordVPN is used to rotate the bot's IP location regularly to avoid bot detection. Outcome: Hands-off PDF collection at scale, with deduplication and IP rotation that keep the pipeline reliable over long runs.
Key Features
- Per-company directory navigation and full PDF download
- Hash-based deduplication of downloaded files
- Regular IP rotation via NordVPN
- File delivery via Dropbox and Mega
- Deployed on an Ubuntu remote desktop server
- Tech Stack: Python, Selenium, Linux
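The hash-based deduplication step can be sketched as: fingerprint each downloaded file with SHA-256 and drop byte-identical copies that arrive under different names. The directory layout and function name below are illustrative, not the delivered code.

```python
import hashlib
from pathlib import Path

def dedupe_pdfs(directory: str) -> list[Path]:
    """Keep the first copy of each unique file; delete byte-identical repeats.

    Returns the list of removed duplicate paths."""
    seen: set[str] = set()
    removed: list[Path] = []
    for pdf in sorted(Path(directory).glob("*.pdf")):
        digest = hashlib.sha256(pdf.read_bytes()).hexdigest()
        if digest in seen:
            pdf.unlink()          # same content under a different filename
            removed.append(pdf)
        else:
            seen.add(digest)
    return removed
```

Hashing content rather than comparing filenames is what catches re-uploads of the same document under new names across company directories.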
Amazon Price Comparison Scraper
Web Scraping
A Python-based scraper that monitors selected product categories on Amazon and compares prices against the client's other sources. The script runs every four hours, exports a clean comparison spreadsheet, and delivers it to the client by email or shared cloud storage. Outcome: Replaced manual price tracking with a hands-off pipeline that produces fresh, decision-ready price reports six times a day.
Key Features
- Scheduled execution every four hours with full logging
- Structured output in JSON and Excel for easy downstream use
- Image extraction with PNG/JPG export
- Automatic delivery via Google Drive and email
- Deployed on an Ubuntu remote desktop server for reliability
- Tech Stack: Python, Selenium, BeautifulSoup, Pandas, MongoDB, Linux
eBay Price Comparison Scraper
Web Scraping
A Python scraper that pulls product listings from eBay (titles, descriptions, IDs, prices, conditions) and benchmarks them against client-supplied competitor prices. The script runs every four hours and produces a comparison spreadsheet that is shared automatically with the client. Outcome: Gave the client a continuously refreshed view of competitive pricing across eBay and partner sites without any manual collection effort.
Key Features
- Multi-field extraction across product categories
- Side-by-side price comparison against external sources
- Scheduled four-hour runs with delivery via email and Google Drive
- Structured CSV/Excel output for analytics teams
- Anti-bot handling with rotating proxies and request throttling
- Tech Stack: Python, Selenium, BeautifulSoup, Pandas, Linux
Wine-Searcher Pricing Scraper
Web Scraping
An automated Python scraper that collects wine names, providers, and prices from Wine-Searcher and identifies the cheapest provider for each wine. Human-like browsing behavior is simulated to avoid detection, and a fresh comparison sheet is produced every four hours. Outcome: Delivered a reliable lowest-price-per-wine report that the client uses to drive sourcing decisions, with no manual lookups required.
Key Features
- Cross-provider price comparison per wine
- Anti-detection techniques to handle bot mitigation
- Scheduled runs with automated email/file-sharing delivery
- Structured spreadsheet output ready for analysis
- Tech Stack: Python, Selenium, BeautifulSoup, Requests, Pandas
TripAdvisor Data Scraper
Web Scraping
A high-throughput Python scraper that collects TripAdvisor listings, descriptions, and images at scale. Text is stored in PostgreSQL with categorized location tables, while images are uploaded to Dropbox. Rotating proxies enable parallel collection across many regions. Outcome: Built a continuously refreshed TripAdvisor dataset that the client uses for travel-content and location-intelligence products.
Key Features
- Parallel scraping with rotating proxies for large-scale extraction
- Categorized PostgreSQL schema for fast querying
- Image pipeline with automatic upload to Dropbox
- Scheduled incremental updates for freshness
- Robust error handling, retries, and run logs
- Tech Stack: Python, Selenium, BeautifulSoup, PostgreSQL, Dropbox API
MassTimes Church Directory Scraper
Web Scraping
A Python scraper that extracts church information and mass timings from masstimes.org. It iterates through US ZIP codes to substitute for the missing site-wide search, runs in parallel for speed, and stores results as JSON files synced to Google Drive. Outcome: Produced a complete, structured dataset of US churches and mass times that powers the client's downstream directory product.
Key Features
- ZIP-code-driven search to bypass missing site search
- Parallel processing for fast nationwide coverage
- JSON output with Google Drive sync
- Rotating proxies to avoid rate limiting
- Scheduled runs from an Ubuntu server
- Tech Stack: Python, Requests, BeautifulSoup, Linux
Charleston Diocese Directory Scraper
Web Scraping
A Python scraper that collects church information, mass timings, and clergy assignments from directory.charlestondiocese.org. Parallelization is used to handle the large directory, and church images are downloaded and renamed per the client's naming convention. Outcome: Delivered a clean, ready-to-use dataset and image library for the client's diocesan directory product.
Key Features
- Full extraction of church profiles and mass schedules
- Image extraction with client-specific naming convention
- JSON output synced to Google Drive
- Rotating proxies for stable, parallel collection
- Tech Stack: Python, Requests, BeautifulSoup
Grocery Store Web Scraper (Spoonful Inc.)
Web Scraping
A large-scale data extraction pipeline built for Spoonful Inc. to centralize product data from major grocery retailers (Kroger, Walmart Food, Tesco, Tesco.ie, Woolworths). The scraper captures ingredients, allergens, nutrition facts, and product metadata while complying with each site's access structure. Outcome: Automated multi-region grocery data collection at 99% accuracy, replacing manual research and powering Spoonful's analytics and price-comparison features.
Key Features
- Custom scrapers for five major grocery retailers
- Filters to exclude non-food items and keep data relevant
- Anti-bot handling with rotating proxies, NordVPN, and Anti-Captcha
- API reverse-engineering for higher speed and accuracy
- Standardized GTIN/UPC fields across all sources
- Clean CSV/Excel deliverables with retry, logging, and rate-limiting
- Tech Stack: Python, Scrapy, BeautifulSoup, Selenium, Requests, Pandas
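Standardizing GTIN/UPC fields usually includes validating the check digit, which is computed with alternating 3/1 weights starting from the rightmost digit of the code body. A small sketch of that check (the exact normalization rules used per retailer are not detailed above):

```python
def gtin_check_digit(body: str) -> int:
    """Check digit for a GTIN-8/12/13/14 body (all digits except the last).
    Weights alternate 3, 1, 3, ... from the rightmost body digit."""
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10

def is_valid_gtin(code: str) -> bool:
    """True when the final digit matches the computed check digit."""
    return code.isdigit() and gtin_check_digit(code[:-1]) == int(code[-1])

print(is_valid_gtin("4006381333931"))  # a well-known valid EAN-13
```

Running this check on every scraped code catches truncated or mistyped identifiers before they are standardized across the five sources.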
CSFloat Skin Trading Bot
Web Scraping
A Python trading bot for CSFloat that monitors live listings, detects underpriced skins using configurable thresholds (wear, float value, rare patterns, stickers), executes purchases, and relists items at competitive prices once trade holds expire. Outcome: Delivered a fully automated trading workflow that captures profitable listings around the clock, with configurable risk and strategy controls.
Key Features
- Real-time monitoring of CSFloat listings
- Smart pricing analysis using float, rarity, and similar-listing comparisons
- Configurable thresholds (e.g., minimum 10% discount before purchase)
- Automatic relisting after trade hold expiry
- Designed for extension to additional marketplaces
- Tech Stack: Python, Selenium, BeautifulSoup, Pandas, NumPy, MongoDB, FastAPI, Redis, Docker, AWS
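The underpriced-skin detection can be sketched as a threshold rule against a fair-value estimate. Here the fair value is simply the median of comparable listings, and the 10% default mirrors the example threshold above; the prices are invented and the real bot's valuation also weighs float, pattern, and stickers:

```python
from statistics import median

def is_underpriced(listing_price: float, comparable_prices: list[float],
                   min_discount: float = 0.10) -> bool:
    """Flag a listing priced at least `min_discount` below the median
    of comparable listings (similar wear/float/pattern/stickers)."""
    if not comparable_prices:
        return False          # no comparables, no signal
    fair_value = median(comparable_prices)
    return listing_price <= fair_value * (1.0 - min_discount)

comps = [100.0, 104.0, 98.0, 110.0, 102.0]   # median 102.0
print(is_underpriced(89.0, comps), is_underpriced(95.0, comps))
```

Using the median rather than the mean keeps one absurdly priced listing from dragging the fair-value estimate around.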
Saudi Car Marketplace Scraper
Web Scraping
A continuous scraping system that extracts car listings, pricing, and images from the major Saudi marketplaces Dubizzle.sa, Syarah, and Haraj. Listings are filtered for validity and synced into a Supabase/PostgreSQL database that powers the client's React-based comparison platform. Outcome: Provides the client's comparison platform with real-time, validated listings from across Saudi Arabia's largest car marketplaces.
Key Features
- Continuous scraping with real-time database updates
- Validation filters for price, condition, and availability
- Image extraction tied to structured listing records
- Direct Supabase/PostgreSQL integration
- Designed to scale across additional marketplaces
- Tech Stack: Python, BeautifulSoup, Selenium, Supabase, PostgreSQL
Automated Job Application System
Web Scraping
A Python automation that submits job applications across major ATS platforms — Workday, Greenhouse, ATS Ripple, and ASBQ Jobs. The system fills application forms, uploads resumes, and submits with minimal human intervention, with dynamic field detection per portal. Outcome: Cut hands-on application time dramatically and produced a working prototype across the most common ATS platforms.
Key Features
- Form filling and submission across multiple ATS portals
- Dynamic field detection adapts to each portal's structure
- Automatic resume upload
- Configurable filters and submission rules
- Detailed logging and error handling for stability
- Tech Stack: Python, Selenium, BeautifulSoup, FastAPI
Court Case Data Extraction & Lead Enrichment
Web Scraping
An automated system that scrapes court case records from US county websites, identifies commercial contract cases via keyword and category filters, and enriches them with verified mobile and email contacts using AccurateAppend and Enformion APIs. Output is delivered as clean Excel datasets for compliant SMS and email outreach. Outcome: Reduced manual case review time by over 80% and improved contact accuracy for downstream SMS campaigns.
Key Features
- Automated scraping of county court databases
- Keyword- and category-based case filtering
- Phone and email enrichment via third-party APIs
- Validation and consistency checks for data integrity
- Timezone-aware (EST) scheduled execution
- Clean Excel datasets ready for analysis
- Tech Stack: Python, Selenium, BeautifulSoup, Requests, Pandas, FastAPI
MCA Court Case Automation
Web Scraping
A Python automation that identifies and processes Merchant Cash Advance (MCA) cases from public court databases. The system filters for relevant cases, enriches them with verified business contacts, and delivers daily lead reports by email. Outcome: Delivered a hands-off lead generation pipeline producing verified MCA business leads daily at 9 AM EST.
Key Features
- Automated court case extraction with MCA-specific filtering
- People-search API integration for contact enrichment
- Daily scheduled runs on a server (9 AM EST)
- Excel email reports with built-in error handling
- Tech Stack: Python, Requests, Pandas, SMTP
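The timezone-aware daily 9 AM Eastern schedule can be sketched with the standard library's `zoneinfo`, which handles the EST/EDT switch automatically. The function name and sample times below are illustrative:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

EASTERN = ZoneInfo("America/New_York")

def next_run(now: datetime, hour: int = 9) -> datetime:
    """Next `hour`:00 US-Eastern run time, DST-aware via zoneinfo."""
    local = now.astimezone(EASTERN)
    candidate = local.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= local:
        candidate += timedelta(days=1)   # today's slot already passed
    return candidate

print(next_run(datetime(2024, 1, 15, 14, 0, tzinfo=EASTERN)))
```

Pinning the schedule to `America/New_York` rather than a fixed UTC offset keeps the report landing at 9 AM local time year-round.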
CRM-to-Kixie Lead Sync Bot
Web Scraping
A Python automation that pulls leads from the client's CRM, organizes them by type and ZIP code, generates per-segment CSV files, and uploads them into Kixie. A second module extracts verified phone numbers from PlanetAltig while skipping duplicates and previously processed leads. Outcome: Eliminated manual data transfers between CRM and Kixie, with verified secondary phone numbers improving outbound contact rates.
Key Features
- Automated lead extraction from the source CRM
- Segmentation by type and ZIP code, exported as CSVs
- Direct upload and sync into Kixie
- Deduplication against previously processed leads
- Secondary phone number enrichment via PlanetAltig
- Tech Stack: Python, Selenium, Requests, Pandas
Real-Time Stock Momentum Web Scraper
Web Scraping
A real-time stock data system built around StockTitan.net's Gold Membership stream. Python scripts extract live market data (symbol, price, volume, % change, float) every few seconds, a custom dashboard displays the top 10 momentum stocks, and a text-to-speech module announces the latest symbol — all suitable for live YouTube broadcasts. Outcome: Delivered a low-latency dashboard (under 1.2 seconds end-to-end) with live audio announcements that the client now uses for real-time market broadcasts.
Key Features
- Live scraping of StockTitan's Gold Membership stream
- Dynamic dashboard refreshing every few seconds
- Voice synthesis announces the latest stock symbol
- Maintains a rolling list of the 10 most recent entries
- Sub-1.2-second end-to-end latency
- Setup guide and tutorial video included
- Tech Stack: Python, Selenium, BeautifulSoup, Flask, gTTS, HTML, CSS, JavaScript
Web Automation Data Extraction (Betting Platform)
Web Scraping
Python scripts that automate data extraction from a client betting site using its underlying APIs. The scripts accept betting codes as input and return structured data; three previously undocumented APIs were discovered on the site, and several bugs in the existing code were fixed along the way. Outcome: Gave the client a fully working extraction toolchain with reliable structured output and minimal supervision required.
Key Features
- Automated retrieval via client-provided APIs
- Discovery and integration of three internal APIs
- Bug fixes and reliability improvements to existing code
- Local and server-deployable execution
- Tech Stack: Python, Requests, BeautifulSoup, Selenium, Pandas
MongoDB & Neo4j Query Optimization
Web Scraping
Designed and optimized MongoDB and Neo4j queries for a 14-exercise client engagement, including dataset imports, query verification, and full written documentation explaining each query and its results. Outcome: Delivered all 14 exercises on time with verified results and clear documentation, meeting the client's expectations.
Key Features
- Python scripts for MongoDB and Neo4j queries
- Data imports into Neo4j AuraDB and MongoDB
- Debugging and tuning for accurate output
- Written documentation and explanations for every exercise
- Tech Stack: Python, MongoDB, Neo4j
Backend API Extraction & Automation
Web Scraping
Automated data retrieval from client websites by reverse-engineering their backend APIs. Network traffic was monitored to identify endpoints, then custom Python scripts were built to fetch, clean, and process the responses. Outcome: Provided the client with continuous, reliable data extraction integrated directly into their backend workflows.
Key Features
- Three private APIs identified through traffic monitoring
- Custom Python scripts for requests, parsing, and cleanup
- Web scraping and data processing across multiple sources
- Reliable, ready-to-consume output for client applications
- Tech Stack: Python, Requests, Selenium
Frontend UI for Amazon Scraper
Web Scraping
A lightweight HTML/CSS/JavaScript frontend that wraps a previously delivered Python Amazon scraper. The interface lets the client add or remove product IDs and trigger scraping runs, with the new scraper extracting product data automatically. Outcome: Gave the client a usable UI on top of the scraper, replacing manual config files and making day-to-day product management much faster.
Key Features
- Clean frontend for product ID management
- Direct integration with the existing backend script
- Custom Amazon scraper for automated data extraction
- Fully tested end-to-end workflow
- Tech Stack: Python, HTML, CSS, JavaScript
Ready to Start Your Next Project?
Let's discuss how we can bring your ideas to life with cutting-edge technology