Our Portfolio

Discover our innovative projects spanning AI, ML, LLMs, automation, and more

50+ Projects Completed
25+ Happy Clients
7+ Years Experience
500+ Citations

Google Scholar Profile

Scholarly Article Indexing and Search Platform

This project develops a web-based application using Python 🐍 and Django ⚙️ to facilitate collaboration among researchers by indexing and searching scholarly articles. The system leverages Google Scholar's 📊 API to retrieve relevant data, providing personalized search results based on user preferences. The application also incorporates natural language processing techniques using NLTK 📝 to extract relevant information from article abstracts and keywords.

Key Features
  • Web-based application for scholarly article indexing and search 🔍
  • Integration with Google Scholar API for data retrieval 📊
  • Natural Language Processing (NLP) for abstract and keyword extraction 📝
  • Language Used: Python 🐍
  • Framework Used: Django ⚙️
  • Library Used: NLTK 📚
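As a rough sketch of the abstract/keyword extraction step (simplified here with a hand-rolled stopword list instead of NLTK's tokenizer; `extract_keywords` and the sample abstract are illustrative, not the production code):

```python
import re
from collections import Counter

# Minimal stopword list for illustration; the real system used NLTK's corpus.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on",
             "with", "by", "is", "are"}

def extract_keywords(abstract: str, top_n: int = 5) -> list[str]:
    """Rank candidate keywords by frequency after stopword removal."""
    tokens = re.findall(r"[a-z]+", abstract.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [word for word, _ in counts.most_common(top_n)]

abstract = ("Scholarly search systems index article abstracts and rank articles "
            "by matching query keywords against indexed abstracts.")
keywords = extract_keywords(abstract, top_n=3)
```

A TF-IDF weighting over the whole corpus would replace raw counts in a fuller version.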

Efficient Urdu Caption Generation using Attention-based LSTMs

We developed an Attention-based LSTM algorithm 🐍 for generating captions in Urdu for provided images. This project involved creating a dataset of images with Urdu captions and training the model for caption generation, resulting in a successful online implementation.

Key Features
  • Deep Learning Methodology: Attention-based LSTM 🤖
  • Plug-n-play approach - Model available online 💻
  • Language Used: Python 🐍
  • Framework Used: TensorFlow 🔥
  • Database Used: Google Sheets + Google Storage 📊
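The core of the captioning model is the attention step: at each decoding step, the decoder state scores every image feature and takes a weighted average as context. A minimal NumPy sketch of Bahdanau-style attention (the project itself used TensorFlow; the shapes and random weights below are illustrative):

```python
import numpy as np

def bahdanau_attention(encoder_states, decoder_state, W1, W2, v):
    """Scores e_i = v . tanh(W1 h_i + W2 s); softmax gives attention weights."""
    scores = np.tanh(encoder_states @ W1 + decoder_state @ W2) @ v
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states        # weighted average of features
    return weights, context

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                   # 4 image-region feature vectors
s = rng.normal(size=(8,))                     # current decoder state
W1 = rng.normal(size=(8, 8)); W2 = rng.normal(size=(8, 8)); v = rng.normal(size=(8,))
w, ctx = bahdanau_attention(H, s, W1, W2, v)
```

The context vector `ctx` is concatenated with the word embedding at each LSTM step to condition the next Urdu token.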

LoRaDRL: Deep Reinforcement Learning Based Adaptive PHY Layer Transmission Parameters Selection for LoRaWAN

We propose and evaluate a deep reinforcement learning (DRL 🤖)-based algorithm for PHY-layer transmission parameter assignment in LoRaWAN, which uses LoRa as its physical layer. Existing rule-based algorithms contribute to packet collisions, degrading the performance of densely deployed low-power wide-area networks (LPWANs). Our DRL-based algorithm ensures fewer collisions and better network performance than state-of-the-art algorithms for LoRaWAN, achieving up to 500% improvement in PDR (packet delivery ratio) in some cases. This research was published at the IEEE Conference on Local Computer Networks (LCN) 2020: https://scholar.google.com/citations?view_op=view_citation&hl=en&user=YHSAFvEAAAAJ&citation_for_view=YHSAFvEAAAAJ:UeHWp8X0CEIC. A continuation article is available at https://scholar.google.com/citations?view_op=view_citation&hl=en&user=YHSAFvEAAAAJ&citation_for_view=YHSAFvEAAAAJ:2osOgNQ5qMEC.

Key Features
  • Deep Reinforcement Learning Methodology: DDQN (Double Deep Q-Networks) 🤖
  • Language Used: Python 🐍
  • Framework Used: TensorFlow 🔥
  • Reinforcement Learning environment was also developed in Python 🐍
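The DDQN update named above decouples action selection from action evaluation: the online network picks the best next action, and the target network scores it. A small NumPy sketch of that target computation (array values are illustrative):

```python
import numpy as np

def ddqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Double DQN targets: online net selects the action, target net evaluates it.
    y = r + gamma * (1 - done) * Q_target(s', argmax_a Q_online(s', a))"""
    best_actions = q_online_next.argmax(axis=1)
    next_values = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_values

targets = ddqn_targets(
    rewards=np.array([1.0, 0.0]),
    dones=np.array([0.0, 1.0]),                   # second transition is terminal
    q_online_next=np.array([[1.0, 2.0], [3.0, 0.0]]),
    q_target_next=np.array([[5.0, 7.0], [9.0, 4.0]]),
)
```

This decoupling is what reduces the Q-value overestimation of vanilla DQN.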

Examining Machine Learning for 5G and Beyond through an Adversarial Lens

Recent advances in deep learning have sparked significant excitement around harnessing rich information hidden in large data volumes and tackling complex problems. This enthusiasm has led to tremendous interest in applying artificial intelligence/machine learning (AI/ML) based network automation, control, and analytics in 5G and beyond. This article presents a cautionary perspective on AI/ML use in the 5G context by highlighting the adversarial dimension across various machine learning types, including supervised 📊, unsupervised 🤖, and reinforcement learning 🏋️‍♀️. The article supports this through three case studies and discusses approaches to mitigate the adversarial ML risk. Additionally, it offers guidelines for evaluating the robustness of ML models and highlights issues surrounding ML-oriented research in 5G more generally. This research was published in IEEE Internet Computing (2021): https://ieeexplore.ieee.org/abstract/document/9314254/.

Key Features
  • Language Used: Python 🐍
  • Framework Used: TensorFlow 🔧
  • The whole environments for the adversarial attacks were developed in Python 🐍
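Many of the attacks in this adversarial-ML line of work are gradient-based. A minimal NumPy sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic classifier — the weights and inputs are illustrative, not from the case studies:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, epsilon):
    """FGSM: perturb x along the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # d(cross-entropy)/dx for a linear logit
    return x + epsilon * np.sign(grad_x)

w = np.array([1.0, -2.0]); b = 0.0
x = np.array([2.0, 0.5])              # clean input, correctly classified as y = 1
x_adv = fgsm(x, w, b, y=1.0, epsilon=0.5)
```

Even this tiny perturbation budget flips the toy classifier's decision, which is the failure mode the article's case studies examine at 5G scale.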

Developed a Deep Reinforcement Learning based algorithm to play small games like Flappy Bird and Lunar Lander.

Developed DQN (Python 🐍) and DDQN (Python 🐍)-based Deep Reinforcement Learning algorithms to play the Flappy Bird and Lunar Lander games. The algorithms played these games successfully over extended periods, achieving high scores.

Key Features
  • Language Used: Python 🐍
  • Framework Used: TensorFlow 🔧
  • Algorithm Used: DQN & DDQN ⚔️

Developed CNN-based algorithms for object classification using Python and TensorFlow, then fine-tuned them for several additional object categories with strong results.

We developed computer vision-based algorithms utilizing Convolutional Neural Networks (CNN) 📊 to classify objects and numbers into predefined categories and sub-categories. Furthermore, we refined these trained models for additional object classification, yielding notable outcomes.

Key Features
  • Language Used: Python 🐍
  • Framework Used: TensorFlow 💻
  • Algorithm Used: CNNs + DNNs 🔧

Developed a semantic matching algorithm for a large corpus of news articles, matching articles with each other and generating knowledge graphs.

Developed a semantic matching algorithm using word embeddings for a large corpus of news articles 📰. The corpus was gathered from numerous news websites using specially designed scrapers 🕸️. Matched articles were stored in the Neo4j graph database 📈. An API was developed to display articles relevant to the one being viewed by a client, presented as a comparison graph 🔢.

Key Features
  • Language Used: Python 🐍
  • Framework Used: TensorFlow 🔥
  • Algorithm Used: Semantic Matching using Word embeddings 💡
  • Database Used: PostgreSQL ☁️ + Neo4J 👀
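The matching idea above can be sketched in a few lines: represent each article as the average of its word embeddings and compare articles by cosine similarity. The toy 2-D embeddings below are illustrative stand-ins for trained vectors:

```python
import numpy as np

# Toy 2-d word embeddings; the real system used trained embeddings.
EMB = {
    "election": np.array([1.0, 0.0]), "vote":  np.array([0.9, 0.1]),
    "football": np.array([0.0, 1.0]), "match": np.array([0.1, 0.9]),
}

def doc_vector(tokens):
    """Average the embeddings of in-vocabulary tokens."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

politics = doc_vector(["election", "vote"])
sports = doc_vector(["football", "match"])
more_politics = doc_vector(["vote", "election"])
```

Pairs scoring above a similarity threshold become edges in the Neo4j comparison graph.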

Developed a K-means-based algorithm for efficient order assignment to a fleet of riders.

Developed a K-Means-based algorithm 🐝 in Python for efficiently assigning pre-defined delivery orders from various vendors to a fleet of riders, incorporating numerous business rules into the process. This algorithm and methodology enabled the client to significantly reduce assignment time, previously handled by a team of professionals. As a result, daily order assignment and acceptance counts increased from roughly 1,000 to approximately 4,000 orders per day.

Key Features
  • Language Used: Python 🐍
  • Framework Used: Scikit-learn
  • Algorithm Used: K-Means 💡
  • Database Used: MongoDB
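The heart of the assignment step is the K-means E-step: each order goes to its nearest rider centroid. A minimal NumPy sketch (coordinates are illustrative; the production system layered business rules on top and used scikit-learn):

```python
import numpy as np

def assign_orders(order_locs, rider_locs):
    """E-step of K-means: assign each order to its nearest rider centroid."""
    dists = np.linalg.norm(order_locs[:, None, :] - rider_locs[None, :, :], axis=2)
    return dists.argmin(axis=1)

orders = np.array([[0.0, 0.0], [10.0, 10.0], [1.0, 2.0]])   # order pickup points
riders = np.array([[1.0, 1.0], [9.0, 9.0]])                 # rider centroids
assignments = assign_orders(orders, riders)
```

Capacity limits and vendor constraints would then filter or re-balance these raw assignments.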

Developed a Graph Convolution Network (GCN)-based algorithm for fraud detection.

Developed a Graph Convolution Network (GCN) 📈-based algorithm for fraud detection, utilizing an online dataset. The dataset was divided into 80% training set and 20% testing set. Initial results were considerable. To enhance the model's robustness, adversarial samples 💣 were used to attack the models, observing a reduced performance. By fine-tuning the model on these adversarial samples, robustness against similar attacks was achieved, ultimately enhancing overall performance.

Key Features
  • Language Used: Python 🐍
  • Framework Used: PyTorch 🔥
  • Algorithm Used: Graph Convolution Networks (GCN) ⚖️
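A single GCN layer implements the propagation rule relu(D^-1/2 (A+I) D^-1/2 · H · W). A small NumPy sketch of that rule (the project used PyTorch; the 3-node graph and weights here are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: relu(D^-1/2 (A+I) D^-1/2 @ H @ W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2 diagonal
    norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(0.0, norm @ H @ W)           # relu activation

A = np.array([[0.0, 1.0, 0.0],                     # 3-node path graph
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
H = np.eye(3)                                      # one-hot node features
W = np.ones((3, 2))                                # toy weight matrix
out = gcn_layer(A, H, W)
```

Stacking two such layers lets each transaction node aggregate information from its two-hop neighborhood, which is what makes GCNs effective for relational fraud patterns.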

Developed an algorithm for COVID detection using X-ray images.

We developed and tested various configurations of Convolutional Neural Networks (CNNs 🤖) for detecting COVID-19 from X-ray images. A binary dataset was used, comprising two classes: "Infected" and "Normal". The infected class includes chest X-rays of individuals with pneumonia and COVID-19 infections, while the normal class consists of normal chest X-ray images. Research findings are publicly available at https://github.com/Inaam1995/MSCS18037_COVID19_DLSpring2020?tab=readme-overview.

Key Features
  • Language Used: Python 🐍
  • Framework Used: PyTorch ☁️
  • Algorithm Used: Convolutional Neural Networks (CNNs) 📊

Developed Notion Clone with AI Features and Live Collaboration

I developed a note-taking application offering advanced features similar to Notion's, including "Chat to Document" functionality 📝, automated "Document Summarization" 🔍, real-time machine-learning-based "Document Translation" 🗣️, live collaboration tools 💻, and comprehensive user authentication 👥.

Key Features
  • Next.js 15 for app development
  • Clerk for user sign-in, sign-up, and authentication
  • Liveblocks for live user collaboration
  • Shadcn to save time on component development
  • OpenAI for Asking questions from Document
  • Firestore for saving user and document information
  • Cloudflare Workers for AI tasks of Document Summarization and Document Translation

AI Agent-based Car Recommendation System

Developed a Python 🐍 AI-powered virtual sales advisor that utilizes scraped data from multiple car-selling platforms and integrates an intelligent recommendation engine to help users find the most suitable car based on their budget, preferences, and usage needs. The system processes natural language queries, filters results by multiple parameters (price, brand, mileage, condition), and provides ranked recommendations with alternatives when exact matches are unavailable. It also supports queries in both English 🇬🇧 and Arabic scripts, and is designed to scale across multiple car marketplaces. Outcome: a scalable recommendation agent that improves car discovery by turning user preferences into actionable, ranked suggestions.

Key Features
  • Natural language query understanding with multi-parameter filtering
  • Ranked recommendations with fallback alternatives
  • Marketplace-ready data ingestion from scraped sources
  • **Tech Stack:** Python, OpenAI GPT-4, LangChain, Selenium, BeautifulSoup, FastAPI, RAG, Docker, AWS, PostgreSQL, VPS
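The filter-then-rank-with-fallback behavior described above can be sketched in plain Python (the `recommend` helper, field names, and sample cars are illustrative, not the production engine):

```python
def recommend(cars, budget, brand=None, max_mileage=None, top_n=3):
    """Hard-filter by budget and mileage, prefer the requested brand,
    and fall back to other brands when no exact match exists."""
    def ok(car, strict):
        if car["price"] > budget:
            return False
        if max_mileage is not None and car["mileage"] > max_mileage:
            return False
        return not (strict and brand and car["brand"] != brand)

    exact = [c for c in cars if ok(c, strict=True)]
    pool = exact or [c for c in cars if ok(c, strict=False)]   # fallback path
    return sorted(pool, key=lambda c: c["price"])[:top_n]      # cheapest first

cars = [
    {"brand": "Toyota", "price": 50_000, "mileage": 30_000},
    {"brand": "Honda",  "price": 45_000, "mileage": 20_000},
    {"brand": "Toyota", "price": 80_000, "mileage": 5_000},
]
```

In the full system, the LLM parses the natural language query into these structured filter arguments before ranking.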

AI/LLM LinkedIn Thought Leadership & Personal Branding Content

Utilizing Microsoft Word 📄, a series of LinkedIn posts was developed to enhance the client's personal brand, establish thought leadership, and attract potential leads. The posts were grounded in factual, verifiable information and showcased the client's distinctive writing style across economics, data analytics, leadership, growth, and sustainable development. Outcome: Ready-to-publish, brand-consistent content aligned with the client’s expertise and optimized for engagement and lead generation.

Key Features
  • Crafted engaging posts with catchy hooks and clear takeaways
  • Incorporated relevant industry statistics and insights
  • Maintained a positive, professional tone with accessible language
  • Included personal anecdotes and actionable advice to drive engagement
  • Followed strict content guidelines and structured formatting
  • Added targeted hashtags for LinkedIn visibility
  • Optimized for mobile readability and retention
  • **Tech Stack:** OpenAI GPT-4, LLM, NLP, LangChain, Python

RAG System for Suggesting Medical Procedures from Client Queries

Developed a system that interprets medical questions and provides accurate procedure suggestions utilizing a Retrieval-Augmented Generation (RAG) pipeline. The solution integrates trusted medical sources with Large Language Model (LLM 🤖) technology to generate evidence-based recommendations supported by citations. Outcome: A clinical support tool that provides ranked procedure suggestions with evidence citations and contraindication alerts. (Note: For decision support only. Requires healthcare professional verification.)

Key Features
  • Medical query processor and intent classifier
  • Vector database of clinical guidelines and research
  • RAG pipeline with hybrid search
  • Secure API with safety filters
  • Demo interface (optional)
  • **Tech Stack:** LangChain, Open-source LLM (Llama/Mistral), BioBERT embeddings, Python, FastAPI, Docker, Cloud deployment
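The retrieval half of a RAG pipeline reduces to vector search plus prompt assembly. A minimal NumPy sketch (the passages, 2-D vectors, and `build_prompt` wording are illustrative, not the clinical corpus):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=2):
    """Rank documents by cosine similarity and return the top-k passages."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(-sims)[:k]]

def build_prompt(question, passages):
    """Assemble a grounded prompt with numbered citations for the LLM."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only the cited context.\n{context}\nQ: {question}"

docs = ["Appendectomy is indicated for acute appendicitis.",
        "MRI is preferred for soft-tissue imaging.",
        "Colonoscopy screens for colorectal cancer."]
doc_vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])   # toy embeddings
top = retrieve(np.array([1.0, 0.1]), doc_vecs, docs, k=2)
prompt = build_prompt("Which procedure treats appendicitis?", top)
```

The production pipeline combined this dense search with keyword (hybrid) search and passed the numbered citations through to the final answer.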

Developed REST APIs for an Order Management System

I designed and implemented a comprehensive backend for an Order Management System's REST APIs, using MongoDB ⏹️ as the primary database. The system was deployed on Heroku 🚀. The REST API suite includes:
  • Order Creation API
  • Order Editing API
  • Order Update Manager API
  • Single Order Display API
  • Multi-Order Display API with advanced filtering capabilities
  • Order Assignment to Fleet API
  • Eligible Rider Finding API
  • Order Status Management APIs

Key Features
  • Complete REST APIs for user management 📊
  • Automated management scripts that run regularly to maintain the documents in the database 💻
  • Language Used: Python 🐍
  • Reporting: Slack + Sentry 📨🔥
  • Authentication Used: JWT Authentication 🔒
  • Deployed to: Heroku using Github ☁️
  • Framework Used: Django REST Framework 🏰
  • Database Used: MongoDB (the queries were written efficiently to keep the database optimized) ⚡
  • Queue Management: Celery + RabbitMQ 🐰💨
  • API Management and Testing: Hopscotch 🔮
  • System Dynamic Scaling: Dynoscale ⚖️
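The JWT authentication listed above rests on HMAC signing of a header and payload. A self-contained stdlib sketch of HS256 signing and verification (illustrative — production systems should use a maintained JWT library, which also handles expiry claims):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"user": "alice"}, b"server-secret")
```

Each API request carries such a token in its Authorization header; the backend verifies the signature before touching the database.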

Developed REST APIs for a Fleet Management System

As technical lead, I developed a comprehensive backend for a Fleet Management System's REST APIs, covering a mobile application 📱, driver portal 📊, onboarding panel ✈️, and driver management portal 👥. I designed the schema for the required databases and tables 📈, using MongoDB 🐒 as the primary database, and deployed the backend on Heroku 🔩. Key APIs developed include:
  • Order Status Update API
  • Single Order Display API
  • Multi-Order Display API
  • Order Assignment to the Fleet API
  • Driver Onboarding/SignUp APIs
  • Driver Management APIs (Update Rider Details)
  • Driver Blocking/Unblocking/Unavailability APIs
  • Location Tracking APIs

Key Features
  • Complete REST APIs for user management 📊
  • Automated management scripts that run regularly to maintain the documents in the database 💻
  • Language Used: Python 🐍
  • Reporting: Slack + Sentry 📣
  • Authentication Used: JWT Authentication 🔒
  • Deployed to: Heroku using Github ☁️
  • Framework Used: Django REST Framework ⚔️
  • Database Used: MongoDB (the queries were written efficiently to keep the database optimized) 💪
  • Queue Management: Celery + RabbitMQ 🐰
  • API Management and Testing: Hopscotch 🎲
  • System Dynamic Scaling: Dynoscale ⚡️

Developed REST APIs for an AI Assistant Mobile APP

I developed a comprehensive backend for the REST APIs supporting an AI assistant mobile app, using AWS DynamoDB 🚀 as the database and deploying on AWS Lambda ⏰. The work included designing the database schemas and tables. The REST API suite comprises:
  • User SignUp API
  • User SignIn API
  • User Conversation API
  • User Forgot Password API
  • Get All Conversations API
  • Get Latest Active Conversation API
  • Upload Image API 🔗
  • Upload Audio API 🎵
  • Upload Video API 📹
  • Live Video Socket Implementation ⏺

Key Features
  • Complete REST APIs for user's data management
  • Automated management scripts that run regularly to maintain the data in the database
  • File Storage Used: AWS S3
  • Email Service Used: AWS SES (Simple Email Service)
  • Authentication Used: JWT Authentication
  • Deployed to: AWS Lambda (developed routing over there to manage this)
  • Framework Used: Customized Python routing script for AWS Lambda
  • Database Used: DynamoDB (the queries were written efficiently to keep the database optimized)
  • System Dynamic Scaling: AWS Load Balancer
  • Routing Used: AWS CloudFront + Route53
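The "routing developed over there" point can be sketched as a single Lambda entry point that dispatches on HTTP method and path (the `@route` decorator, paths, and event shape below are illustrative; API Gateway proxy events carry `httpMethod` and `path` fields like these):

```python
ROUTES = {}

def route(method, path):
    """Register a handler for a (method, path) pair."""
    def deco(fn):
        ROUTES[(method, path)] = fn
        return fn
    return deco

@route("POST", "/signin")
def signin(event):
    # A real handler would validate credentials against DynamoDB here.
    return {"statusCode": 200, "body": "ok"}

def handler(event, context=None):
    """Single Lambda entry point dispatching to the registered handler."""
    fn = ROUTES.get((event["httpMethod"], event["path"]))
    if fn is None:
        return {"statusCode": 404, "body": "not found"}
    return fn(event)
```

Keeping one Lambda with internal routing avoids a cold start per endpoint at the cost of a slightly larger deployment package.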

Developed a custom bot to regularly check for hotels at good prices in specified places from “marriott.com”.

I developed a custom script using Python 🐍 to retrieve hotel pricing data from Marriott's website (www.marriott.com 📊) at specified locations and timings. The script generated a table of prices for hotels near the specified location and formatted it in HTML format before sending it to the client via email. To populate the specified locations, I utilized a MongoDB database 💻 with a custom frontend built specifically for this purpose. The script involved performing specific steps to reach the required data, including updating filters as needed during the process.

Key Features
  • Automated script that runs at specified hours 🕰️
  • Deployed to Remote Desktop Server using Ubuntu 💻

Developed a custom bot to regularly check for the top 100 crypto USDT trading pairs from Binance.

This project entailed developing a custom script using Python 🐍 to periodically gather and update information on the top 100 USDT trading pairs, along with their corresponding pricing data, from Binance's main website 🏗️. The extracted list was then utilized by a bot that regularly monitored price fluctuations in these trading pairs.

Key Features
  • Automated script that ran at regular intervals
  • Deployed to Remote Desktop Server using Ubuntu 🐧

Developed a custom bot to extract PDFs from a specific website.

This project involved designing and implementing a custom script to automate document updates from a website 🌐. The process entailed searching for a company, navigating to its page, accessing the document directory, and downloading each PDF file 🔴 individually. After download, the files were hashed and checked for duplicates. To evade the website's bot-detection mechanism, NordVPN 🛡 was used to regularly change the bot's location.

Key Features
  • Automated script that ran at regular intervals
  • Deployed to Remote Desktop Server using Ubuntu
  • File sharing through Dropbox and Mega
  • Regular location rotation using NordVPN

Scraped Amazon to Compare Prices with Other Websites

A Python 🐍-based automated script was designed and developed to scrape specific product categories from Amazon's website, collecting relevant information such as names, descriptions, product IDs, and prices. The pricing of these products was then compared with the client-provided prices for similar products on other websites. A comprehensive spreadsheet was generated, detailing the different prices of the same product across Amazon and the other websites. The script ran every four hours, producing a new spreadsheet that was sent to the client via email or file-sharing platforms.

Key Features
  • Automated script that runs on regular intervals 💻
  • Deployed to Remote Desktop Server using Ubuntu 🐧
  • File sharing through Google Drive 📁
  • Data extraction in JSON files 📊
  • Image extraction into PNG/JPG files 📸
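The comparison step itself is a join of two price maps by product ID. A plain-Python sketch (the `comparison_rows` helper, field names, and sample IDs are illustrative, not the client's schema):

```python
def comparison_rows(amazon_prices, other_prices):
    """Join the two price maps by product ID and compute the difference."""
    rows = []
    for pid, price in amazon_prices.items():
        other = other_prices.get(pid)          # None when no competitor match
        rows.append({
            "product_id": pid,
            "amazon_price": price,
            "other_price": other,
            "difference": None if other is None else round(price - other, 2),
        })
    return rows

rows = comparison_rows({"B001": 19.99, "B002": 5.49}, {"B001": 17.50})
```

Each run wrote these rows out as a spreadsheet (e.g. via `csv.DictWriter`) before emailing it to the client.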

Scraped eBay to Compare Prices with Other Websites

A Python 🐍 script was developed to automate data collection from specific product categories on eBay's website, extracting essential details such as names, descriptions, product IDs, prices, and more. The collected pricing data was then compared with that of products on other websites provided by the client. A comprehensive spreadsheet was generated, featuring the different prices of the same product across eBay and other platforms. The script runs every four hours, producing a new sheet and transmitting it to the client via email or file-sharing tools.

Key Features
  • Automated script that ran every four hours 💻
  • Price comparison across eBay and client-provided websites 📊
  • Comprehensive spreadsheet generation with product details 📋
  • Delivery to the client via email or file-sharing tools 📨

Scraped www.wine-searcher.com to get wine pricing

This project entailed developing an automated Python 🐍 script that scraped wine pricing information from the Wine-Searcher website. The script collected names, prices, and providers of various wines, then compared prices across the different providers on the site. A spreadsheet was generated showing which provider offers each wine at the cheapest price. The script ran every four hours, creating a new sheet and sending it to the client via email or file-sharing platforms 📨. To circumvent the website's bot-detection mechanism, trial-and-error techniques were employed to simulate human-like behavior.

Key Features
  • Automated script that ran every four hours 💻
  • Price comparison across providers on the same website 📊
  • Spreadsheet generation highlighting the cheapest provider per wine 📋
  • Human-like browsing behavior to evade bot detection 🔒

Scraped TripAdvisor to get all their data, including images and customer reviews

This project entailed developing an automated script that efficiently scraped TripAdvisor data using Python 🐍. The extracted textual information was stored in a PostgreSQL database 📊, while categorized location data was organized into separate databases. Images were uploaded to Dropbox 💾. The script operates on a regular schedule, continuously updating the dataset with new locations and images. To facilitate efficient scraping of large datasets, rotating proxies ⚡️ were utilized to enable parallel data collection.

Scraped masstimes.org to get all their data, including church information and mass timings.

Using Python 🐍, an automated script was developed to scrape relevant data from masstimes.org. Extracted information about churches, including mass timings, was formatted into JSON files. Because the site provided no central search methodology, US ZIP codes were used to simulate searches. Parallelization techniques were employed to process the large number of churches efficiently, and rotating proxies enabled parallel extraction given the dataset's size. The extracted JSON files were saved to Google Drive 📁.

Key Features
  • Automated script that runs on regular intervals 💻
  • Deployed to Remote Desktop Server using Ubuntu 🐧
  • File sharing through Google Drive 📁
  • Rotating proxies to protect the bot from being detected by the website 🔒
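The ZIP-code fan-out described above maps naturally onto a thread pool. A minimal sketch with the network call stubbed out (`fetch_churches` is a hypothetical placeholder; the real version issued proxied HTTP requests to the site's search):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_churches(zip_code):
    """Hypothetical per-ZIP fetcher; the real one queried the site's search."""
    return [{"zip": zip_code, "name": f"St. Example ({zip_code})"}]

def scrape_all(zip_codes, workers=8):
    """Fan the ZIP-code searches out across a thread pool and flatten results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        batches = pool.map(fetch_churches, zip_codes)   # preserves input order
    return [church for batch in batches for church in batch]

churches = scrape_all(["10001", "90210", "60601"])
```

Threads suit this workload because each worker spends most of its time waiting on network I/O; rotating proxies keep the parallel requests from looking like a single client.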

Scraped directory.charlestondiocese.org to get all their data, including church information, mass timings, information regarding the assignments, and images of the churches.

A Python script was developed to scrape data from directory.charlestondiocese.org, extracting comprehensive information about churches, including mass timings and assignment details, into JSON files. To optimize processing efficiency for a large number of churches, parallelization techniques were employed. The extracted JSON files were saved on Google Drive 📁, a file-sharing platform. Furthermore, images were extracted from respective churches and stored according to the client-provided naming convention.

Key Features
  • Data extraction in JSON files 📊
  • Image extraction into PNG/JPG files 📸
  • File sharing through Google Drive ☁️
  • Rotating proxies to protect the bot from being detected by the website 🔒

**Web Scraping for Grocery Store Websites: Spoonful Inc**

I collaborated with Spoonful Inc. to develop a large-scale data extraction pipeline 🚀 using Python 🐍 for major grocery store websites, enabling the client to centralize product data for market analysis and inventory management. The objective was to accurately extract detailed product information, including ingredients 👩‍🍳, allergens ⚠️, nutrition facts 🥑, and product metadata 💻, while ensuring compliance with each site's access structure 🔒.

Key Features
  • Built custom Python scrapers (Scrapy, Selenium, BeautifulSoup) for dynamic grocery websites.
  • Extracted structured data from Kroger, Walmart (Food), Tesco, Tesco.ie, and Woolworths.
  • Implemented filters to exclude non-food items and ensure data relevance.
  • Managed anti-bot measures using rotating proxies, NordVPN, and Anti-Captcha API.
  • Used API reverse-engineering and request tracing to improve speed and accuracy.
  • Maintained data integrity by standardizing fields like GTIN/UPC across datasets.
  • **Deliverables:**
  • Delivered clean, structured CSV/Excel datasets for all sites.
  • Ensured 99% accuracy with retry, logging, and rate-limiting systems.
  • Provided regular updates and samples before full-scale runs.
  • **Tech Stack:** Python, Scrapy, BeautifulSoup, Selenium, Requests, Pandas, CSV/Excel, NordVPN, StormProxies, AntiCaptcha API

**Automated CSFloat Skin Trading Bot**

Developed a Python-based trading bot for CSFloat that monitors listings, identifies underpriced skins using configurable thresholds, executes automated purchases, and relists items for profit. Integrated smart analysis of float value, rarity, and pricing for a fully automated, efficient trading workflow.

Scope of Work:
  • Deal Monitoring & Analysis: monitors active CSFloat listings to detect underpriced skins by comparing each listing against similar items (considering wear, float value, rare patterns, and stickers).
  • Profitability Logic: applies configurable thresholds, e.g., only purchasing items at least 10% cheaper than the next-lowest comparable listing.
  • Automated Trading: automatically purchases selected skins and relists them on CSFloat at competitive, profitable prices once trade-hold periods expire.
  • Custom Configuration: supports user-defined parameters for price thresholds, specific skin types, and trading rules.
  • Scalability: designed to integrate additional trading logic or marketplaces in the future.

Key Features
  • Real-time monitoring of CSFloat listings.
  • Smart analysis of pricing, rarity, and float values.
  • Configurable thresholds for automated decision-making.
  • Automatic relisting post-trade hold for optimized profit.
  • Minimal manual intervention required for trading workflow.
  • **Outcome:** Delivered a reliable, automated skin-trading solution that reduces manual effort, ensures timely purchases of profitable listings, and increases trading efficiency while providing configurable settings for risk management and strategy optimization.
  • **Tech Stack:** Python, Selenium, BeautifulSoup, Pandas, NumPy, MongoDB, FastAPI, Redis, Docker, AWS.
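The profitability threshold described above ("at least 10% cheaper than the next-lowest comparable listing") reduces to a small predicate. A plain-Python sketch (`is_deal` and the prices are illustrative, not the bot's full scoring that also weighs float, patterns, and stickers):

```python
def is_deal(listing_price, comparable_prices, min_discount=0.10):
    """True when the listing undercuts the lowest comparable listing
    by at least `min_discount` (10% by default)."""
    if not comparable_prices:
        return False                      # nothing to compare against
    next_lowest = min(comparable_prices)
    return listing_price <= (1.0 - min_discount) * next_lowest
```

The bot evaluates this predicate on every new listing event and only then triggers the purchase flow.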

Developed a car scraping system to extract and update car listings from the major Saudi car marketplaces **Dubizzle.sa**, **Syarah**, and **Haraj**.

Key Features
  • Continuous, automated scraping with real-time updates
  • Data filtering for valid listings (price, condition, availability)
  • Image scraping and structured data integration
  • Supabase/PostgreSQL database connection for seamless data flow
  • Scalable for integration with multiple car marketplaces
  • **Tech Stack:** Python · BeautifulSoup · Selenium · Supabase · PostgreSQL

Automated Job Application System

Using Python 🐍, an automated job application system was developed to streamline the application process across multiple platforms, including Workday 💼, Greenhouse 🌿, ATS Ripple ⚖️, and ASBQ Jobs 📊. The system automates form filling, resume uploads, and application submission with minimal human intervention.

Key Features
  • Automated form filling and submission workflows
  • Dynamic field detection for multiple job portal structures
  • Resume auto-upload integration
  • Configurable job search filters and submission logic
  • Logging and error-handling for stable automation performance
  • **Outcome:** Successfully delivered a fully functional prototype capable of automating job submissions across major ATS platforms, significantly reducing manual effort and improving efficiency for job seekers.
  • **Tech Stack:** Python, Selenium, BeautifulSoup, FastAPI, Automation Frameworks

Automated Court Case Data Extraction & Contact Enrichment System

Developed an automated system that extracts and analyzes court case records from U.S. county websites, identifying commercial contract cases and generating structured Excel reports 📊. The system leverages APIs, including AccurateAppend 📲 and Enformion 🔒, to enhance data with verified mobile numbers and emails for compliant SMS and email campaigns 📨.

Key Features
  • Automated web scraping of county court databases.
  • Smart case filtering using keyword and category recognition.
  • API integration for phone/email enrichment and validation.
  • Error-handling and data consistency checks to ensure reliability.
  • Scheduled execution with timezone-aware automation (EST).
  • Exported clean, structured Excel datasets for downstream analysis.
  • **Outcome:** Successfully automated case identification and lead enrichment, reducing manual review time by over 80%. Improved contact accuracy for SMS campaigns using verified mobile data from external enrichment APIs.
  • **Tech Stack:** Python, Selenium, BeautifulSoup, Requests, Pandas, FastAPI (for API integration), Cron Scheduler

MCA Court Case Automation & Lead Generation System

A technical solution was developed using Python 🐍 to automate the identification and processing of Merchant Cash Advance (MCA) court cases from public databases. The system efficiently filters relevant cases, enhances them with verified business contact information, and delivers daily lead reports via email.

Key Features
  • Automated court case extraction and MCA filtering.
  • Integrated people search APIs for contact enrichment.
  • Deployed on server with daily scheduled runs (9 AM EST).
  • Configured automated Excel email reports and error handling.
  • **Tech Stack:** Python, REST APIs, Pandas, Automation, Cron Scheduling, SMTP, Server Deployment
  • **Outcome:** Delivered a fully automated, reliable lead generation system that reduced manual effort, optimized data costs, and ensured daily delivery of verified MCA business leads.

CRM to Kixie Lead Automation Bot

A Python 🐍 automation bot was developed to streamline lead extraction from the client's CRM, directly uploading them into Kixie 📊. The system efficiently organized leads by type and zip code, generated separate CSV files ⚙️, and ensured seamless syncing between platforms 🔄. Furthermore, an additional module was integrated to extract verified phone numbers from PlanetAltig 💻 website while avoiding duplicates or previously processed leads 🔒.

Key Features
  • Automated lead extraction from the client’s CRM system.
  • Seamless lead upload and synchronization with Kixie.
  • Added logic to include only new leads from PlanetAltig.
  • Extracted secondary verified phone numbers for improved data quality.
  • Delivered complete Python script with setup and usage instructions for local execution.
  • **Tech Stack:** Python, Selenium, Requests, Pandas, CSV Automation, Web Scraping, Kixie API Integration
  • **Outcome:** Provided the client with a fully automated, efficient lead management workflow that eliminated manual data transfers, ensured real-time synchronization, and enhanced lead accuracy with verified contact details.

Live Stock Momentum Web Scraper & YouTube Broadcast Automation

A real-time stock data automation system was developed for StockTitan.net 📊, utilizing Python 🐍 scripts to extract, process, and display live market data updates every few seconds. The system fetched live data fields including stock symbol, volume, price, time, % change, and float. A custom-built web app interface powered by HTML 💻, CSS 💻, and JavaScript 💻 displayed the top 10 momentum stocks. Additionally, a voice automation module was implemented to audibly announce the latest stock symbol in real-time, enabling seamless live YouTube broadcasts.

Key Features
  • Built a web scraper to extract live data from StockTitan’s Gold Membership stream.
  • Developed a dynamic web dashboard that updates every few seconds.
  • Added voice synthesis to announce the latest stock symbol aloud.
  • Implemented logic to maintain only the 10 most recent entries.
  • Optimized for minimal delay (<1.2s) between site data and display output.
  • Provided complete setup guide and a tutorial video for deployment.
  • **Tech Stack:** Python, Selenium, BeautifulSoup, Flask, gTTS (Google Text-to-Speech), HTML/CSS, JavaScript, WebSocket, Automation Scheduling
  • **Outcome:** Delivered a fully automated, low-latency system that continuously streams real-time stock updates with audio announcements, enabling the client to broadcast accurate, up-to-the-second market data live on YouTube.
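The "keep only the 10 most recent entries" logic is a natural fit for a bounded deque. A stdlib sketch (the `MomentumBoard` class and field names are illustrative, not the dashboard's actual model):

```python
from collections import deque

class MomentumBoard:
    """Keeps only the most recent N stock updates, newest first."""
    def __init__(self, size=10):
        self._entries = deque(maxlen=size)   # oldest entry falls off automatically

    def add(self, update):
        self._entries.appendleft(update)     # newest update goes to the front

    def rows(self):
        return list(self._entries)

board = MomentumBoard(size=10)
for i in range(12):                          # push 12 updates; only 10 survive
    board.add({"symbol": f"TCK{i}", "price": 1.0 + i})
```

`deque(maxlen=...)` gives the eviction for free, so the update loop stays O(1) per incoming tick.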

Web Automation & Data Extraction

Python 🐍 scripts were developed to automate data extraction from a client's website using APIs 📊. The scripts accept betting codes as input and produce structured data, ensuring accurate and reliable results. Furthermore, three key APIs were extracted from the site, and existing code bugs were corrected for seamless operation 💻.

Key Features
  • Automated retrieval of betting data via client-provided APIs.
  • Extracted and integrated three APIs for seamless access.
  • Debugged and optimized existing code for reliability.
  • Provided instructions for running scripts locally or on a server.
  • **Tech Stack:** Python, Requests, BeautifulSoup, Selenium, Pandas.
  • **Outcome:** The client received a fully functional automation solution, enabling fast and accurate data extraction, reducing manual effort, and ensuring the scripts run reliably with minimal supervision.

Database Queries & Optimization

Developed and optimized database queries using MongoDB 🍃 and Neo4j 📈 for a client's assignment exercises. The project scope included editing and implementing 14 exercises, automating data imports, and verifying accurate query results. Comprehensive written documentation was also prepared to accompany the queries.

Key Features
  • Developed Python scripts for MongoDB and Neo4j queries.
  • Imported and managed datasets in Neo4j AuraDB and MongoDB.
  • Debugged and optimized queries for accurate results.
  • Prepared the written documentation and explanations for all exercises.
  • Ensured solutions were authentic, clear, and verifiable.
  • **Tech Stack:** Python, MongoDB, Neo4j AuraDB, PyCharm.
  • **Outcome:** The client received fully functional scripts and documentation, enabling smooth execution of all queries with correct outputs. All deliverables were completed on time and met the client's expectations.

Backend API Extraction & Automation

The project entailed automating data retrieval from client-shared websites via their backend APIs. By monitoring website traffic, identifying the relevant API endpoints, and writing custom Python 🐍 scripts, I fetched, cleaned, and processed the extracted data.

Key Features
  • Extracted 3 APIs from the client’s website using traffic monitoring tools.
  • Developed custom Python scripts to handle API requests, parse responses, and fix existing code issues.
  • Implemented web automation, scraping, and data processing from multiple sources.
  • Ensured reliable, accurate, and ready-to-use data outputs for client applications.
  • **Tech Stack:** Python, API integration, Web Scraping, Traffic Monitoring Tools, Automation Scripts.
  • **Outcome:** The client received fully functional scripts capable of continuous data extraction, processing, and integration, streamlining their backend workflows efficiently.

Frontend Interface + Amazon Scraper

I developed a simple, functional frontend using HTML 📋, CSS 💅, and JavaScript 💻 to integrate with a backend Python 🐍 script created in a previous order. The interface lets users easily add and remove product IDs. I also built a custom Amazon scraper to automatically extract product data as requested.

Key Features
  • Developed a clean frontend for product ID management.
  • Integrated frontend with the existing backend script.
  • Created a custom Amazon scraper for automated data extraction.
  • Delivered fully tested and functional scripts.
  • **Tech Stack:** HTML, CSS, JavaScript, Python.
  • **Outcome:** The client received a streamlined interface and a reliable Amazon scraper, ensuring easy product management and smooth data extraction.

Ready to Start Your Next Project?

Let's discuss how we can bring your ideas to life with cutting-edge technology