Betting on Precision — How Robot Automation Gave toto.bg the Winning Edge
1. Executive Summary
Online sports betting is a world of high stakes and high speed. Every second counts, and every decision demands precision. That is what the management at toto.bg concluded when they realised that repetitive manual betting selections were not just inefficient but risky as well.
This case study uncovers how toto.bg designed and implemented a custom-built Robot Automation solution to replace imperfect workflows with a swift, fully automated system. The system integrates real-time data via API, allows seamless CSV uploads and executes betting logic with algorithmic accuracy. The enhancement helped toto.bg significantly reduce human error, boost operational speed and, as a bonus, unlock scalable efficiency.
2. Introduction
What once seemed manageable in the betting world grew complicated as the industry expanded and demanded more stringent rules, putting pressure on management to get everything right. For toto.bg, it became tough to track fast-moving games, large volumes of data and endless choices unfolding simultaneously.
Imagine sorting through all that information manually: tracking live matches, reading user strategies, checking the odds and inputting everything precisely, again and again. It not only slowed down the process but also made it more stressful, tiresome and error-prone.
toto.bg was seeking a suitable solution to address this issue.
Enter: Robot Automation.
It enabled coupling user logic with real-time sports data, transforming the arduous manual betting tasks into a sleek automated process.
3. Project Overview
Project Name: Robot Automation
Objective: Automate betting selections to optimise speed, precision, and scalability
Key Features:
- Real-Time API Integration: Pulls live match data from toto.bg
- CSV-Based User Interface: Allows users to upload their betting strategies with drag-and-drop ease
- Automated Decision Engine: Executes betting choices with zero manual input

10. Conclusion & Future Outlook
Robot automation began as a tool; however, it ultimately evolved into a transformative technology. It became more than an optimiser. It showcased how intelligent workflows can deliver pinpoint accuracy, scale effortlessly and empower business users to take control.
We are now exploring more possibilities with Robot Automation. These include:
- Predictive Betting Inputs via historical match data
- Multi-Language Interfaces for international rollout
- Cloud Deployment for real-time sync across devices.
Let’s conclude this case study with a statement: The future of betting automation isn’t just possible; it’s programmable, too.
4. Workflow & Process
Step 1: Plugging into the Pulse
To enable smart betting decisions, the system had to be fed with what was happening in the sports world. That’s where the toto.bg API came into the picture. It works like a digital scoreboard, continuously streaming real-time information about matches, teams and betting options.
A component was built to connect to this stream. It collects all the live match details and organises them neatly into three options:
- 1 (home win)
- X (draw) or
- 2 (away win)
It was like giving the robot a front-row seat to watch the match, enabling it to understand exactly what’s going on and capturing every moment. This step eliminated the need to track the results manually or type in odds. Instead, the system reads the data instantly, precisely and doesn’t miss a beat, similar to a top sports analyst who never takes a break.
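For illustration, a minimal Python sketch of this step is shown below. The endpoint URL, field names and response shape are placeholders, since the real toto.bg API details are not reproduced here.

```python
# Hedged sketch of the live-data step. Every name below (URL, JSON fields)
# is a hypothetical placeholder, not the actual toto.bg API contract.
import requests

API_URL = "https://example.com/api/live-matches"  # placeholder endpoint

def fetch_live_matches() -> list[dict]:
    """Pull the current match feed and keep only the fields the robot needs."""
    response = requests.get(API_URL, timeout=10)
    response.raise_for_status()
    matches = []
    for item in response.json().get("matches", []):
        matches.append({
            "match_id": item["id"],
            "home": item["home_team"],
            "away": item["away_team"],
            # The three classic outcomes for each fixture:
            # "1" = home win, "X" = draw, "2" = away win
            "options": {
                "1": item["odds_home"],
                "X": item["odds_draw"],
                "2": item["odds_away"],
            },
        })
    return matches
```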
Step 2: Designed for Humans
The front end is developed using CustomTkinter, which even non-techies find easy to operate. Users can upload a CSV file with their choices without needing to wrestle with Excel. Behind the curtain, the system validates the inputs, maps them to live data and queues them for automated action.
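To make the upload flow concrete, here is a hedged sketch of a CustomTkinter front end with a basic CSV validation pass. The column names ("match_id", "selection") are assumed for illustration and are not the production schema.

```python
# Illustrative CSV-upload front end; layout and column names are assumptions.
import customtkinter as ctk
from tkinter import filedialog
import pandas as pd

REQUIRED_COLUMNS = {"match_id", "selection"}  # assumed CSV schema

def load_strategy() -> None:
    """Let the user pick a CSV, validate its columns, and queue it."""
    path = filedialog.askopenfilename(filetypes=[("CSV files", "*.csv")])
    if not path:
        return
    df = pd.read_csv(path)
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        print(f"Invalid file, missing columns: {missing}")
        return
    print(f"Queued {len(df)} selections for automated execution")

app = ctk.CTk()
app.title("Strategy Upload")
ctk.CTkButton(app, text="Upload CSV", command=load_strategy).pack(padx=20, pady=20)
app.mainloop()
```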
Step 3: Logic Comes to Life
In this step, the CSV meets real-time data, prompting the robot to take on the task, process logic, confirm eligibility and execute bets with quiet algorithmic precision. It’s like having a personal assistant who works relentlessly and yet never makes a mistake.
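A hedged sketch of how that decision step might combine the uploaded selections with the live feed is shown below; place_bet() is a stand-in for whatever execution call the production system actually uses.

```python
# Illustrative decision engine: match each uploaded selection to a live
# fixture and execute it. Data shapes mirror the sketches above and are
# assumptions, not the production interfaces.
def place_bet(match_id, option, odds):
    """Stub for the real execution call (assumed interface)."""
    print(f"Placing bet: match {match_id}, option {option} at odds {odds}")

def run_strategy(selections: list[dict], live_matches: list[dict]) -> None:
    live_by_id = {m["match_id"]: m for m in live_matches}
    for sel in selections:
        match = live_by_id.get(sel["match_id"])
        if match is None:
            continue  # selection refers to a match that is not currently live
        option = sel["selection"]          # "1", "X" or "2"
        if option in match["options"]:
            place_bet(sel["match_id"], option, match["options"][option])
```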
5. Explanation of Project
Let’s simplify the explanation by imagining the betting process as a relay race. Initially, in the manual-run process, every step, right from data review to logic matching to placing bets, was prone to errors. Robot automation replaced the entire relay team with a single ultra-efficient sprinter. As simple as that.
The automation harmonises the static strategy from user-uploaded CSVs with dynamic conditions pulled from live match data. Users benefit from no misclicks and no missed matches; all they get is zero-latency logic execution.
6. Purpose & Benefits
The purpose was to enable the maximum number of bets in the shortest possible time. That is what management envisioned when they set out to make Robot Automation happen.
Here’s what made the difference:
- A faster workflow: Repetitive tasks are now completely automated, transforming tedious steps into one-click execution.
- Simplicity in uploading CSV: Allows users to upload betting logic in batches.
- Live Data Smarts: Merging real-time match data with rule-based inputs delivered unmatched precision.
- No Human Error: No more blunders due to manual data copy or mismatches.
- Scalability: The modular system design enables easy scalability across new games, markets or regions.
7. Marketing & Sales Objectives
Robot Automation isn’t just a tech project. It’s a living case study for intelligent automation. Here are a few outcomes:
- Exhibit Real Automation: The project demonstrates that automation is feasible for complex, rule-based tasks.
- CSV + API Versatility: This demonstrates that the system can integrate with any structured data workflow.
- Speed and Accuracy: Automated performance was faster and more accurate than the manual process.
- User Empowerment: Users can apply their betting logic without any technical know-how.
- Positioning as a Tech Partner: The automation process positions the development team as problem-solvers, not merely coders.
Imagine Robot Automation serving as a lighthouse to guide other industries out of the ocean of manual tasks.
8. Technical Stack & Use Cases
Outstanding technology is invisible to the user; however, it is powerful under the hood. Here’s some brief information about the technical stack and use cases.
- Python: It serves as the logic engine, handling API parsing, data mapping, and selection algorithms.
- CustomTkinter: The UI layer, providing a sleek, user-friendly desktop interface that is easy for users to operate.
- PyInstaller: It packages everything into a plug-and-play .exe file.
This model can also be adapted for:
- Automated invoice validation
- Real-time pricing bots
- Inventory restocking automation
- Rule-based email or alert generation

9. Results & Impact
The results were immediate and measurable once the automation was launched. Here are the highlights:
- Manual effort was reduced by 75% during betting events.
- Input-to-output accuracy leapt to 99.2%, as compared to 89% in previous manual workflows.
- Execution time was reduced from minutes to milliseconds.
- Users requested the integration of the system for new sports and games within the first month.
Email Data Extraction and Lead Generation from PST Files: Turning Historical Emails into Qualified Leads with AI
Introduction
Businesses are shifting to data-driven models, yet they often overlook one of the richest sources of untapped lead data: email archives. Buried within old Outlook backups are potential goldmines of sales intelligence, contact information, and engagement patterns. This project focuses on unlocking that potential by automating the extraction of valuable lead data from PST files using Python and artificial intelligence (AI) technologies. The result was a scalable system that not only parsed emails but also enriched, validated, and prepared the data for smooth CRM integration.
Brief Description
The solution was built to mine data, specifically large volumes of historical emails stored in PST format, Outlook’s native archive format. These archives contained years of business communication that held valuable lead information if mined smartly.
We developed a Python-based automation tool to handle this task completely. It parsed PST files to extract email metadata and content, used artificial intelligence to interpret unstructured text, and generated organized CSV files ready for the CRM platform.
The key processes include deduplication, validation via external APIs, and filtering irrelevant or internal communication. This tool revived forgotten email threads and surfaced active sales opportunities.

Objectives
Client-specific goals
The tool was engineered to meet several targeted objectives:
- Automatic extraction of lead data from old Outlook emails - The system eliminates the need for manual data mining by parsing PST backups automatically. These backups contain thousands of emails from previous years, which, when correctly parsed, reveal valuable insights and contacts.
- Structured dataset generation for the sales team - Rather than presenting raw data, the tool structures extracted information into clearly defined fields, including names, email addresses, job titles, company names, phone numbers, and more. This gave the sales team an actionable dataset they could filter, sort, and analyze as needed.
- Cleaning, deduplication, and validation of extracted contacts - To ensure high data quality, duplicate contacts were removed using a session-wide comparison. Additionally, syntax checks and blacklist filters validate emails, enabling teams to focus solely on usable and high-value leads.

Conclusion
The project illustrates the powerful combination of AI, automation, and data validation in transforming legacy email archives into actionable lead intelligence. The use of Google Gemini for intelligent parsing and NeverBounce for email validation ensured accuracy and relevance while incorporating features such as deduplication, logging, and domain filtering.
This case is a clear example of how old communications, when combined with modern technology, can fuel new opportunities and streamline lead generation workflows.

Wider Business Purposes
Beyond immediate use cases, the solution aligned with broader business development goals:
- Lead generation: Identification of High-Quality, Engaged contacts
The automation helps identify contacts who have previously interacted with the business or individuals already familiar with the company. It also focuses on leads that are likely to engage again, surfacing high-value targets for outreach.
- Data enrichment: Converting unstructured email data into structured information
By using AI, the system adds structure and intelligence to unstructured email content. Information, such as job titles and inferred company types, turns basic emails into strategic sales leads.
- CRM Readiness: Generating importable CSVs for HubSpot/Salesforce
The organized output of data is prepared for compatibility with some of the frequently used CRM platforms. This ensures the importing process is smooth, allowing the sales team to start engagement activities without delay.
- Personalization for Outreach: Using Roles and Industry to Tailor Campaigns
Detailed job titles and company information make hyper-targeted messaging easy. For example, marketing executives can receive campaign pitches, while IT heads can get product specifications relevant to their business.
- Email validation: Improving overall deliverability via external APIs
Email validation via APIs reduces bounce rates. This improves campaign efficiency and helps maintain a stronger sender reputation for future outreach.
- Competitor insights: Identifying company engagement in past conversations
By analyzing sender domains and relevant content, the tool identifies which competitor companies were involved in previous conversations. This information informs competitive strategies and reveals potential partnership opportunities.
Technical Base
The entire system was built using modular and scalable technologies:
- Programming language: Python 3.x
The solution was developed in Python, chosen for its versatility, wide range of libraries, and robustness in data manipulation and automation.
- AI API: Google Gemini
The tool was integrated with Google Gemini to perform natural language parsing. This could extract names, job roles, companies, and phone numbers and infer organizational structure from contextual clues.
- Email validation: NeverBounce API
To ensure data accuracy, emails were validated through the NeverBounce API, which checks for deliverability, syntax correctness, and domain reputation.
- Data Storage: CSV via Pandas
Structured data was stored using Pandas DataFrames and exported as CSV files. This format facilitates universal compatibility and ease of use in CRMs.
- Logging: Custom module
A dedicated logging module was used to track every step of the extraction process, from successful parses to entries that needed debugging.
Key features
PST Email Extraction
At the core of the tool is its ability to extract information from PST files:
- Parses .pst Outlook Backups
This tool uses a PST parser to read and iterate over each item in the backup file. This helps in navigating folders, subfolders, and threads.
- Extracts email bodies and metadata
Each email’s subject, body, sender and receiver metadata, and timestamp are captured.
- Filters out Internal or Irrelevant Domains
Domains like internal company emails or spam-like sources are filtered using verified and configured blacklists (failed.json).
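For illustration, a compressed sketch of the PST-walking step using the pypff bindings for libpff is shown below; attribute names vary slightly between pypff versions, so treat this as an outline rather than the production parser.

```python
# Hedged outline of PST traversal with pypff; folder/message attribute names
# follow the common pypff API and may differ by version.
import pypff

def walk_folder(folder, records):
    """Recursively collect sender, subject and body from every message."""
    for i in range(folder.number_of_sub_messages):
        msg = folder.get_sub_message(i)
        body = msg.plain_text_body or b""
        records.append({
            "sender": msg.sender_name,
            "subject": msg.subject,
            "body": body.decode("utf-8", errors="ignore") if isinstance(body, bytes) else body,
        })
    for i in range(folder.number_of_sub_folders):
        walk_folder(folder.get_sub_folder(i), records)

pst = pypff.file()
pst.open("backup.pst")            # path to the Outlook archive
emails = []
walk_folder(pst.get_root_folder(), emails)
pst.close()
```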
AI-based parsing
Once raw emails are extracted, Google Gemini powers the intelligent interpretation:
- Contact names- Names are pulled from both email metadata and content, accounting for signatures and context within threads.
- Job titles- The AI reads email signatures and introductory lines to deduce any professional roles.
- Company names- It detects company names with the help of domain references, email signatures, and mentions in the content.
- Phone numbers and addresses - Contact details embedded in signatures or within emails are extracted.
- Company type inference- Based on domain names and context, the AI attempts to infer the industry or function of the organization.
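A hedged sketch of that parsing step with the google-generativeai client follows; the prompt wording, model name and JSON handling are assumptions rather than the exact production code.

```python
# Illustrative Gemini call that asks for the contact attributes listed above
# as JSON. Prompt and error handling are simplified assumptions.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

PROMPT = (
    "Extract the contact name, job title, company, phone number and a "
    "best-guess company type from this email. Reply as JSON with those keys.\n\n{email}"
)

def parse_email(email_text: str) -> dict:
    response = model.generate_content(PROMPT.format(email=email_text))
    try:
        return json.loads(response.text)
    except json.JSONDecodeError:
        return {}  # fall back to an empty record if the reply is not valid JSON
```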
Validation and deduplication
To ensure the best possible output, the following processes are undertaken:
- Removal of duplicate entries across various sessions - This tool maintains a cache of processed entries to prevent redundancy, even during multiple runs.
- Validate email syntax - Regex patterns check the format and validity of each email address before further processing.
- Skips blacklisted domains - Internal domains can be excluded using a configurable list, helping the team focus on external leads.
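One way these checks could be wired together is sketched below; the failed.json loading, regex pattern and NeverBounce request are illustrative assumptions based on the components named above.

```python
# Hedged cleaning pass: syntax check, blacklist filter, session-wide dedup,
# and an optional deliverability check against NeverBounce's v4 endpoint.
import json
import re
import requests

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")
BLACKLIST = set(json.load(open("failed.json")))   # excluded domains
seen = set()                                      # session-wide dedup cache

def keep_contact(email, api_key=None):
    email = email.strip().lower()
    if email in seen or not EMAIL_RE.match(email):
        return False
    if email.split("@")[-1] in BLACKLIST:
        return False
    if api_key:  # optional external validation (assumed response shape)
        resp = requests.get(
            "https://api.neverbounce.com/v4/single/check",
            params={"key": api_key, "email": email},
            timeout=10,
        ).json()
        if resp.get("result") != "valid":
            return False
    seen.add(email)
    return True
```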
Data Output
The final dataset is a well-organized CSV file containing:
- Cleaned output
- Fields for each row, namely Name, Company, Job Title, Email, Website, Phone Number, and other valuable attributes that assist with segmentation and targeting.
Customization and notes
- Domain filtering via failed.json
A JSON file allows dynamic updates to the domain exclusion list without changing code.
- Rate limiting with time.sleep(1)
To comply with API usage quotas, delays were added between requests to Google Gemini.
- Logging errors and duplicates
Detailed logs were used to enable traceability and help troubleshoot any skipped or failed entries.
- Future extensibility
While the current version's output CSVs are available, the architecture was designed to support direct integration with CRM APIs, such as HubSpot and Salesforce, in future iterations.

Outcome
Achievements
- Massive time savings, with thousands of emails processed in an hour.
- High-quality leads with enriched metadata to ensure data isn’t just complete but meaningful and ready for outreach.
- Focused sales efforts, ensuring that relevant, high-intent contacts were prioritized.
Potential ROI
- An 80-90% reduction in lead research time, resulting in a decrease in manual labor required to identify qualified leads.
- Real-time validation and domain filtering further reduced the bounce rates.
- Historical emails, once considered digital clutter, are now an active resource in the business development arsenal.
Automating High-Volume Lottery Ticket Generation with the Toto-TKT Project
Overview
The Toto-TKT project involved developing an automated ticket creation system to replace manual formatting and production processes for branded lottery and raffle tickets. This project was created to read Excel data and generate structured, print-ready tickets with precise layouts, essentially functioning like an automated ticket machine for high-volume output.
The goal was to turn digital inputs into high-quality physical materials, making it easier to produce tickets for campaigns, sweepstakes, and other large-scale events.
Client Background
The client works in promotional events, where ticket-based systems are an important part of operations.
Whether running a sweepstake, community giveaway, or corporate campaign, the client frequently has to produce a large volume of tickets with custom layouts and branding elements.
Previously, these tickets were created manually by editing data in Excel and mapping it onto physical templates. This was not only labour-intensive but also inconsistent, especially at higher volumes. Small layout mistakes, printing issues, and the risk of duplication led to operational setbacks. They needed a solution that would eliminate these problems while saving time. We endeavoured to set up an automated ticket creation system.

Challenges & Objectives
The following were some of the challenges the client faced and the resulting objectives formulated for the auto lottery processor:
Challenges
- The manual formatting process was error-prone, especially at high ticket volumes.
- Inconsistencies in layout often led to misalignment during printing.
- Physical ticket templates required precise mark placement to remain legible and usable.
- The ticket generation process had to accommodate varied ticket lengths and structures, depending on the campaign.
Objectives
- Develop a tool that could convert structured Excel rows into printable tickets.
- Maintain strict grid alignment for marks and numbers to match physical layouts.
- Output professional-quality PDFs that require no additional editing.
- Build a process that could be reused for future campaigns with minimal setup.

Conclusion
Relu Consultancy built an auto lottery processor that solved a clear operational pain point for the client. The Toto-TKT Project turned spreadsheet data into clean, event-ready tickets with speed and accuracy.
The system’s modular design made it easy to reuse across different types of events, including raffles, lotteries, and entry passes. The combination of precision formatting, batch processing, and user-friendly documentation made it easy for the client to adopt and scale.
Overall, the project made high-volume ticket creation faster and more consistent, helping the client deliver professional, branded materials for their campaigns.

Approach & Implementation
Relu Consultancy created a Python-based automated ticket generation system capable of reading Excel files and converting them into layout-specific, high-resolution PDFs. The system was built to handle variations in ticket structure, support branding requirements, and generate outputs ready for printing on standard A4 paper. To support adoption, the final solution included walkthrough documentation and a screen recording for internal training.
Key Features
- The script reads six-number combinations from a designated "Combination" column in Excel.
- It supports multiple formats: 10, 12, or 13 values per combination mapped precisely on a grid.
- Each ticket displays either dots or crosses based on client preference.
- Ticket numbers are rendered upside-down to suit the physical layout's readability.
- Outputs are saved as A4-sized PDFs at 300 DPI for high-quality print compatibility.
- Error handling, validation checks, and auto-foldered output directories help maintain order during large batch generation.
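As an illustration of the approach, the sketch below uses pandas and reportlab to read the "Combination" column and place marks on a grid; the grid geometry, mark size and rotated ticket number are assumed values rather than the client's actual template.

```python
# Hedged ticket-generation sketch: one A4 page per Excel row, with a cross
# (or dot) per value and an upside-down ticket number. Coordinates are
# placeholder assumptions, not the real layout.
import pandas as pd
from reportlab.lib.pagesizes import A4
from reportlab.lib.units import mm
from reportlab.pdfgen import canvas

GRID_ORIGIN = (20 * mm, 250 * mm)   # assumed top-left of the mark grid
CELL = 8 * mm                       # assumed cell pitch
MARK = 1.5 * mm                     # assumed mark half-size

def draw_ticket(c, ticket_no, combination, use_cross=True):
    x0, y0 = GRID_ORIGIN
    for value in combination:
        col, row = (value - 1) % 10, (value - 1) // 10
        x, y = x0 + col * CELL, y0 - row * CELL
        if use_cross:
            c.line(x - MARK, y - MARK, x + MARK, y + MARK)
            c.line(x - MARK, y + MARK, x + MARK, y - MARK)
        else:
            c.circle(x, y, MARK, fill=1)
    c.saveState()
    c.translate(105 * mm, 20 * mm)
    c.rotate(180)                    # ticket number rendered upside-down
    c.drawCentredString(0, 0, f"Ticket {ticket_no}")
    c.restoreState()

df = pd.read_excel("tickets.xlsx")   # expects a "Combination" column
c = canvas.Canvas("tickets.pdf", pagesize=A4)
for i, row in enumerate(df.itertuples(), start=1):
    combo = [int(v) for v in str(row.Combination).split(",")]
    draw_ticket(c, i, combo)
    c.showPage()
c.save()
```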
Common Use Cases
The Toto-TKT Project’s functionality made it suitable for a variety of event and campaign needs. Some of the most frequent applications included:
- Lottery Tickets: Generates unique combinations for official lottery draws, laid out for immediate printing.
- Raffle Entries: Creates hundreds or thousands of entries for community or commercial raffles, all formatted consistently.
- Event Entry Passes: Custom tickets with individual identifiers and formatting tailored to specific event themes.
- Survey or Exam Sheets: Marked layouts aligned with answer sheets or feedback forms, where precise placement is crucial for scanning or review.

Results & Outcomes
The automated ticket generation system delivered significant improvements across multiple areas:
- Faster Turnaround: The manual formatting process that once took hours was reduced to just minutes.
- Accuracy at Scale: Mark placements were accurate to the millimetre, helping avoid printing misalignment and formatting problems.
- Consistent Branding: Branded tickets followed a standard design across batches, improving presentation at events.
- Scalable Outputs: The client could generate large quantities of tickets in one run without worrying about duplication or formatting breakdowns.
By delivering consistent, professional tickets that could be printed and distributed immediately, the client gained greater control over promotional materials. The tool also opened opportunities for new use cases, such as interactive surveys and educational events.
Key Takeaways
Several important lessons emerged from the Toto-TKT Project:
- Automating layout-based tasks significantly reduced the risk of human mistakes and the time spent on repetitive formatting.
- Using a grid-based logic system helped maintain precise alignment across ticket designs and batches.
- Including a screen recording and walkthrough documentation made onboarding internal users simpler and more effective.
- The solution bridged digital inputs (Excel) and physical outputs (printed tickets), giving the client a reliable way to manage ticketing for events and campaigns of any size.
AI-Powered Web Scraping for Smarter Data Extraction
Introduction
Managing and analyzing large volumes of data can be challenging, especially when information is spread across multiple sources. This project focused on developing an automated web scraping system combined with AI-driven data structuring to simplify data extraction, improve accuracy, and enhance decision-making. By leveraging web scraping with Python, SerpApi, and Selenium, alongside Gemini 1.5 Pro AI, the solution provided a structured approach to gathering and processing part number data while minimizing manual effort.
Client Background
The client needed a streamlined solution to collect and process part number data from multiple websites. Their existing method relied on manual data entry, which was slow, prone to errors, and increasingly difficult to manage as the volume of information grew. Extracting, analyzing, and organizing this data required significant time and effort, limiting their ability to make timely decisions. To address these challenges, they required automated web scraping tools capable of mechanizing these tasks while ensuring accuracy and adaptability.

Challenges & Goals
The following were some of the challenges and goals of the project:
Challenges
Collecting part number data manually was time-consuming and required a considerable amount of effort. This method not only slowed down the process but also led to inconsistencies, making it difficult to maintain accuracy.
Many websites posed additional challenges, such as requiring logins, incorporating captchas, and using dynamically loaded content, all of which complicated data extraction. These barriers made it difficult to gather information efficiently and required constant manual adjustments.
Even when data was successfully retrieved, it often lacked a structured format. This made it challenging to compare and analyze, further slowing down decision-making processes. As the need for data grew, the limitations of manual collection became even more apparent, highlighting the necessity for a more effective and scalable approach.
Goals
The first goal of the project was to create a system for web scraping using Selenium, SerpApi, and Python to collect part number data from multiple websites. By automating this process, the aim was to reduce reliance on manual entry and improve the reliability of data collection.
Another key objective was to apply AI-based processing to analyze and organize the extracted data. The system needed to identify alternate and equivalent part numbers, allowing for a more comprehensive understanding of available components and their relationships.
Ensuring data retrieval remained accurate and consistent despite website restrictions was also a priority. A key question was how to bypass captchas during web scraping. The solution also had to navigate logins and dynamically loaded content without disrupting the flow of information.
Finally, the extracted data needed to be presented in structured formats, such as CSV and Google Sheets. This would allow for seamless integration into the client’s existing workflows, making the information easily accessible and actionable.

Conclusion
This project improved how the client collects and processes data, replacing manual methods with an automated system that organizes and structures information effectively. By combining web scraping with AI, Relu Consultancy provided a reliable solution tailored to the client’s needs. The result was a more accessible, accurate, and manageable data collection process, allowing for better decision-making and reduced workload.

Implementation & Results
A custom web scraping workflow was built using SerpApi, Selenium, and Python. The system was designed to handle various website structures, extract part numbers accurately, and minimize errors. With this approach, data retrieval became faster and required less manual input.
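A hedged outline of that scraping layer is shown below: SerpApi surfaces candidate pages for a part number, and Selenium loads each page to pull values from a CSS selector. The query string, selector and result handling are placeholders, not the client's actual configuration.

```python
# Illustrative scraping layer combining SerpApi search with Selenium page loads.
from serpapi import GoogleSearch
from selenium import webdriver
from selenium.webdriver.common.by import By

def find_candidate_urls(part_number, api_key):
    """Use SerpApi to find pages that mention the part number."""
    search = GoogleSearch({"q": f'"{part_number}" datasheet', "api_key": api_key})
    results = search.get_dict()
    return [r["link"] for r in results.get("organic_results", [])]

def scrape_part_numbers(url):
    """Load a page and extract text from an assumed part-number selector."""
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        cells = driver.find_elements(By.CSS_SELECTOR, ".part-number")  # placeholder selector
        return [c.text.strip() for c in cells]
    finally:
        driver.quit()
```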
AI-Powered Data Structuring
Once the data was collected, Gemini 1.5 Pro AI processed and structured the information. This AI-powered data extraction:
- Identified alternate and equivalent part numbers, ensuring a broader scope of data.
- Formatted the extracted information into structured files for better usability.
- Generated reports in CSV and Google Sheets, making data more accessible for analysis.
Reliable System for Long-Term Use
To maintain accuracy and consistency, the system was built to:
- Adjust to changing website structures, reducing the need for constant manual updates.
- Bypass obstacles like logins, captchas, and dynamic content without compromising reliability.
- Require minimal manual intervention while being adaptable to increasing data demands.

Business Impact
By implementing this system, the client saw significant improvements in their workflow:
- Reduced manual data collection, lowering errors and saving valuable time.
- Faster data retrieval, enabling quicker responses to business needs.
- Structured insights made data easier to analyze, improving decision-making.
- A system built to handle growing data needs, ensuring continued usability.
Key Insights
- Reducing manual processes saves time and minimizes errors.
- AI-powered structuring makes data more practical for analysis.
- Addressing website restrictions ensures reliable data extraction over time.
- Systems that adapt to growing data requirements remain useful in the long run.
Backup Email Parsing Automation System: Streamline Monitoring and Instant Access to Insights
Introduction
Reading emails and extracting crucial data is a tedious, time-consuming task. This is especially true for IT teams that spend their valuable time going through the text-heavy backup emails received from backup and disaster recovery service providers.
The emails from the service providers usually contain critical information about the status and health of data backups. These emails help the IT team monitor the backup processes and take necessary actions if any problem is detected.
However, if any critical detail is missed during manual reading, it can lead to delayed responses, unnoticed backup failures, and potential data loss. A simple negligence can disrupt business continuity and affect infrastructure security.
With automation enhancing efficiency in traditionally time-consuming processes, Relu experts helped build a system that automated data extraction from backup emails sent by providers like Acronis and Datto.

Project Scope
The client had been relying on a manual approach to reading emails, especially those dealing with critical data related to security and backups. This manual approach slows down response times and increases the risk of overlooking essential details, which can lead to missed alerts and potential data protection failures.
The client faced the same issues and required an automated system to parse backup alert emails from Acronis and Datto. Acronis and Datto are well-known providers of backup, disaster recovery, and cybersecurity solutions.
Email parsing for IT management is the automatic extraction of structured data from emails and helps the teams collect specific data accurately. In this case, the business wanted an intelligent automated email extraction solution for fetching details, like backup alerts, timestamps, and backup size.

Objectives
The objective of this Acronis and Datto backup alert monitoring process is to solve common problems such as the following:
- Eliminate Manual Tracking of Backup Alert Emails
With the backup email parsing automation solution, the need for IT teams to sift through numerous emails is completely eliminated. The team can focus on resolving the issues rather than searching for them.
- Accurate Parsing of Critical Data
Automated solutions can parse the key details, like backup status, timestamps, and affected systems, with precision. This reduces the risk of human errors that occur during manual extraction.
- Filter Out Irrelevant Details From Email
The intelligent Datto backup monitoring solution smartly filters out non-essential emails, like promotional emails and routine confirmations. This allows the team to focus on important alerts that indicate potential failures or security threats.
- Store Extracted Data in a Structured Format
The extracted backup data is stored in a centralized and structured format such that the team can easily access the data and go through it. The structured data can be used for further processing, providing actionable insights.

The Bottom Line
Manual data entry is prone to errors, and a small error in security and data backup aspects can lead to severe consequences. That’s why Relu’s automated email parsing for IT monitoring ensures that all the critical details are extracted accurately and stored in a structured manner. The stored data can be exported in other formats, like CSV, JSON, or Excel, for further analysis. It can also be exposed via an API built with Flask or FastAPI to fetch MySQL data dynamically and present it in JSON format, as sketched below.
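A minimal sketch of that export idea, assuming a Flask endpoint with placeholder table and column names:

```python
# Hedged Flask export: read parsed alerts from MySQL and return them as JSON.
# Connection details, table and column names are illustrative assumptions.
from flask import Flask, jsonify
import mysql.connector

app = Flask(__name__)

@app.route("/backup-alerts")
def backup_alerts():
    conn = mysql.connector.connect(
        host="localhost", user="report", password="secret", database="backups"
    )
    cursor = conn.cursor(dictionary=True)
    cursor.execute("SELECT device, status, backup_size, created_at FROM backup_alerts")
    rows = cursor.fetchall()
    conn.close()
    return jsonify(rows)

if __name__ == "__main__":
    app.run()
```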
The automated email extraction solution for backup alerts provides a scalable and error-resilient framework for backup monitoring. With automated error handling, logging mechanisms, and server deployment, IT teams can keep track of everything easily with minimal intervention.

Solution
To build the automated email parsing solution for IT monitoring, Relu experts designed a platform that automates email processing, parsing, and tracking of backup alert emails from Acronis and Datto. The platform is designed using Python’s libraries and frameworks for email processing, parsing, and monitoring.
Here’s how our solution works:
1. Email Processing and Parsing
The backup email parsing automation solution fetches the backup email alerts from Acronis and Datto using Microsoft’s email API. The implemented parsing rules extract the data systematically to capture the relevant details, such as:
- Backup Status
- Timestamp
- Device/Server Name
- Backup Size
- Error Messages (if any)
- Backup Location
- Next Scheduled Backup
Once the processing of a set of emails is done, the email status is updated to prevent duplicate parsing.
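For illustration, such parsing rules could be expressed as regular expressions like the ones below; the real Acronis and Datto templates differ, so these patterns are placeholders that show the shape of the extraction rather than the production rules.

```python
# Hedged field extraction for one assumed alert format.
import re

FIELD_PATTERNS = {
    "backup_status": re.compile(r"Status:\s*(\w+)", re.IGNORECASE),
    "timestamp": re.compile(r"Completed at:\s*([\d\-: ]+)", re.IGNORECASE),
    "device": re.compile(r"Device:\s*(.+)", re.IGNORECASE),
    "backup_size": re.compile(r"Backup size:\s*([\d.]+\s*[KMGT]?B)", re.IGNORECASE),
}

def parse_alert(body):
    """Return a dict of extracted fields; missing fields come back as None."""
    record = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = pattern.search(body)
        record[field] = match.group(1).strip() if match else None
    return record
```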
2. Acronis Parsing
The Datto and Acronis backup alert processing solution extracted the backup job details from both the subject and the body. The key data points included the start time and end time of the backup, the duration of the backup process, and backup size.
3. Datto Parsing
For Datto emails, the Acronis and Datto backup monitoring solution extracted the essential details for accurate status tracking. The applied filters removed irrelevant emails that did not contain backup alert data.
4. Error Handling and Logging
Custom error messages were implemented to detect and log issues such as missing or malformed backup data, email parsing failures, and connectivity or API issues. The logging mechanism was designed to track errors and provide debugging insights for better system maintenance.
5. Data Storage and Integration
The parsed data was stored in a MySQL database, which helps the client quickly retrieve backup history and supports efficient monitoring and reporting. The email records are updated with their processing status to ensure transparency.
6. Deployment and Automation
The backup email parsing automation solution is deployed on a server to ensure it remains up and running at all times. The automated scripts monitor and parse the incoming emails from Acronis and Datto in real time, reducing the need for manual intervention.

Results & Impact
The implementation of backup email parsing automation transformed backup monitoring. The system improved the team’s efficiency and productivity, which had previously been drained by manually checking and reading the emails from Acronis and Datto. Manually checking each email from the stack, extracting the details, and copying them into a database is an error-prone process.
However, this automated solution substantially reduced the manual efforts, allowing the IT team to focus more on proactive issue resolution and less on email checking.
Chrome Extension for Sports Betting Automation
Client Background
The client operates in the overseas sports betting space. Their operations involved monitoring and placing bets across several sports betting platforms, including PS3838 and CoolBet. Previously, this process was handled manually, which was time-consuming and introduced room for human error. The client sought a sports betting software solution that could automate these actions while adapting to platform changes and maintaining data reliability.

Challenges & Objectives
The objectives and potential challenges of the project were as follows:
Challenges
- Each sports betting platform used a different site structure, which made creating a consistent automation logic difficult.
- Layouts, CSS selectors, and API structures changed frequently, often breaking existing scripts.
- Many platforms employed anti-bot systems, such as CAPTCHAs, behavioral detection, and IP restrictions.
- The client needed up-to-date and accurate data at all times to make informed decisions, which required validation and error-handling mechanisms.
Objectives
- Build a flexible Chrome extension capable of sports betting automation across multiple platforms.
- Design it to adapt quickly to frontend changes and different site architectures.
- Implement basic bot avoidance features such as proxy rotation and request timing.
- Maintain and update the system regularly to support long-term use.

Conclusion
The browser-based automation system developed by Relu Consultancy gave the client a reliable way to manage betting tasks across platforms like PS3838 and CoolBet. Each platform had its quirks, so custom scripts were built to handle the different site structures. To keep things running smoothly, the system dealt with common issues like CAPTCHAs, changing layouts, and bot detection using proxy rotation, timed requests, and fallback strategies.
The sports betting software continued to perform well even as betting platforms evolved. With steady updates and bug fixes, it reduced the amount of manual work involved and helped create a more consistent, automated process. Overall, the project showed how important it is to build adaptable tools that can grow with changing online environments.

Approach & Implementation
Custom browser scripts were developed for each supported website, allowing the extension to interact with the site’s elements as a user would. The code was structured in a modular fashion, making it easier to isolate and update individual components when a platform changed. This modularity also simplified testing and future feature integration.
A lightweight design was prioritized to ensure the extension ran smoothly on standard user systems without needing significant resources or complex setup.
The AI betting software incorporated dynamic learning mechanisms to adapt to platform changes efficiently.
Maintenance & Updates
Frequent platform updates often caused selector breakage. To address this, regular bug-fixing cycles were introduced to inspect and update affected scripts. Code refactoring accompanied these updates to maintain a clean codebase.
The selector logic was improved with strategies to handle minor layout shifts, reducing the need for constant manual changes. The automated betting strategies integrated into the system ensured that users could adjust betting logic without overhauling the software.
Anti-Bot Considerations
Several strategies were used to reduce the risk of bot detection:
- Proxy rotation was implemented to distribute traffic and avoid IP bans.
- Request timing and user-like behaviors were randomized to mimic human actions.
- Fallback mechanisms were added to maintain functionality during temporary access issues or data gaps.
Monitoring & Support
Basic logging captured session data, including timestamps, responses, and errors, enabling faster issue identification. Retry logic helped the system recover from failed or timed-out requests.
Ongoing support involved regular performance reviews, updates, and the rollout of new features in response to evolving needs. The integration of web scraping for betting sites helped ensure that real-time odds and data were always accessible.

Results & Outcomes
The Chrome extension reduced the need for manual interaction in betting tasks. Processes such as monitoring odds, placing bets, and navigating between platforms became partially or fully automated.
Response times improved across multiple platforms, and the system remained stable even during frequent front-end changes. The automated sports betting system continued to perform well despite evolving platform restrictions.
The solution also scaled over time. As new sports and platforms were introduced into the client’s workflow, the extension continued to deliver reliable performance thanks to its maintainable design and structured update process.
Key Takeaways
Upon completing the project, we identified the following key takeaways:
- In fast-moving environments like online sports betting, sports betting software significantly improved efficiency and accuracy.
- Planning for constant change early on helped the system stay one step ahead of platform updates and anti-bot measures.
- Regular maintenance, whether updating broken selectors or fixing subtle bugs, was key to keeping the extension stable over time.
- Having a modular code structure allowed new platforms and features to be added without reworking the entire system.
- Even small improvements in betting bot development freed up time and allowed the client to focus on decision-making rather than manual data gathering.