This project exercises the practice of automating tasks that would otherwise require significant manual effort. Its goal is to access https://www.latimes.com/ and perform a search using a keyword provided as input to the system. Based on filters and a specific time period, the system then extracts information from the generated results, such as title, description, and more. Finally, it analyzes the collected data, downloads the related images, and saves all the information to an .xlsx file.
- Get the Robocorp Code extension for VS Code.
- You'll get an easy-to-use side panel and powerful command-palette commands for running, debugging, code completion, docs, etc.
- Get RCC
- Use the command:

```
rcc run
```
🚀 After running the bot, check out log.html under the output folder. To speed things up, you can install the Open in Browser plugin and open the generated log.html directly. The log.html file centralizes all information related to the bot's execution, such as variable values, errors, execution sequence, and more.
I recommend getting familiar with adding your dependencies in conda.yaml to control your Python dependencies and the whole Python environment for your automation.
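For reference, a minimal conda.yaml might look like the sketch below; the versions shown are illustrative, not the project's actual pins:

```yaml
channels:
  - conda-forge
dependencies:
  - python=3.10.12   # versions are illustrative, not the project's pins
  - pip=23.2.1
  - pip:
      - rpaframework==28.0.0
```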
All details about the framework used in this bot's solution can be found in the rpaframework documentation.
This directory contains configuration files used by the bot. These settings include browser configuration, selectors, the access URL, and data related to the resulting Excel file.
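As a purely hypothetical illustration (the file name, keys, and selector values below are assumptions, not the project's actual configuration), such a file could look like:

```yaml
# settings.yaml -- hypothetical example; actual files and keys may differ
url: https://www.latimes.com/
browser:
  headless: true
selectors:
  search_button: "css:button[data-element='search-button']"
  search_input: "css:input[data-element='search-form-input']"
excel:
  output_file: output/results.xlsx
  sheet_name: Results
```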
All enumeration classes are defined here.
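For example, an enumeration for the search's sort order could look like the sketch below; the class and member names are assumptions, not the project's actual enums:

```python
from enum import Enum


class SortOrder(Enum):
    # Hypothetical values; the real enums in this directory may differ.
    RELEVANCE = "Relevance"
    NEWEST = "Newest"
    OLDEST = "Oldest"
```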
This directory is used to feed input data into the bot. Robocorp uses it to store work-item input files.
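A minimal sketch of consuming such a work item with rpaframework's RPA.Robocorp.WorkItems library is shown below; the variable names ("search_phrase", "months") are assumptions, not the project's actual payload:

```python
from RPA.Robocorp.WorkItems import WorkItems

work_items = WorkItems()
work_items.get_input_work_item()
# Variable names are hypothetical; the real payload may differ.
search_phrase = work_items.get_work_item_variable("search_phrase")
months = work_items.get_work_item_variable("months", default=0)
```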
This directory defines all the system classes used to browse the page, capture results, and apply data manipulation.
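As an illustrative sketch of the browsing step using rpaframework's RPA.Browser.Selenium (the CSS selectors here are assumptions, not the project's actual ones):

```python
from RPA.Browser.Selenium import Selenium

browser = Selenium()
browser.open_available_browser("https://www.latimes.com/")
# Selectors below are illustrative assumptions.
browser.click_element("css:button[data-element='search-button']")
browser.input_text("css:input[data-element='search-form-input']", "jobs")
browser.press_keys("css:input[data-element='search-form-input']", "ENTER")
browser.wait_until_element_is_visible(
    "css:ul.search-results-module-results-menu", timeout=15
)
```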
Directory structured by Robocorp to hold all final results generated by the bot.
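For illustration, writing the collected rows to the .xlsx file with rpaframework's RPA.Excel.Files could look like the sketch below; the column names and file path are assumptions, not the bot's actual output schema:

```python
from RPA.Excel.Files import Files

excel = Files()
excel.create_workbook("output/results.xlsx")
# Column names are hypothetical; the real bot may use different headers.
rows = [
    {"title": "Example headline", "description": "Example summary", "date": "2024-01-01"},
]
excel.append_rows_to_worksheet(rows, header=True)
excel.save_workbook()
excel.close_workbook()
```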