Description
I've started work on fleshing out the datamodel in a feature branch on my fork. I'm basically going to ignore all the `_generated` models because they're trash.
Goals:
- [Feature] Base Datamodels #6
  The datamodels are the root of all the frontend functionality of the API. They need to be designed to directly consume responses from Kintone. These are meant to be the raw attributes inherited by an interface that then manipulates and utilizes them using the `Route` endpoints defined in `routes.py`.
- [Feature] Fields #3
  All of these are 'implemented' as of now, but interacting with them still requires dictionary syntax (yuck!). Ideally we could build out a deeper model within each `Field` subclass that gives the user a nicer interface.
- [Feature] Routes #4
  Too many to list here, but every endpoint documented in the Kintone API should be implemented in `Routes`. The responses are what drive the models, so we need to make sure the models match them exactly (or at least capture the parts we need).
- [Feature] Interfaces #5
  Once all the drudgery of the routing and data implementation is taken care of, we get to do the fun part of building the public interface! This will define all the primary methods that each object has and will link included values of any object to their respective interface.
  The root interface will be `Kintone`, which will then provide a way to access `Users` and `Apps`. These will all be `@property` values, with each property making a request to the API when accessed. The return value must be a bound `Model` that inherits the routes of the object that requested it.
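To make the datamodel and field goals concrete, here is a minimal sketch of a base datamodel that consumes a Kintone-style record payload, where each field arrives as a `{"type": ..., "value": ...}` object keyed by field code, and wraps every entry in a `Field` subclass so users get attribute access instead of dictionary syntax. All class names here (`Field`, `SingleLineText`, `Record`) are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch: a Field subclass wraps the raw {"type", "value"}
# dict so users don't need dictionary syntax to read a value.
@dataclass
class Field:
    code: str
    type: str
    value: object

class SingleLineText(Field):
    """A concrete Field subclass; other types would follow the same pattern."""

# A base datamodel that directly consumes a Kintone-style record payload.
class Record:
    def __init__(self, raw: dict):
        self._fields = {
            code: SingleLineText(code, f["type"], f["value"])
            for code, f in raw.items()
        }

    def __getattr__(self, code: str) -> Field:
        # Fall back to the field table, so record.title works.
        try:
            return self._fields[code]
        except KeyError:
            raise AttributeError(code)

raw = {"title": {"type": "SINGLE_LINE_TEXT", "value": "Inventory item"}}
record = Record(raw)
print(record.title.value)  # attribute access instead of raw["title"]["value"]
```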
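Similarly, a minimal sketch of the `Routes` idea: one method per documented endpoint, returning the parsed JSON that the models consume. The `get_record` method name, the session protocol, and the stub response are assumptions for illustration; the `/k/v1/record.json` path follows Kintone's documented REST endpoint pattern:

```python
import json
from urllib.parse import urlencode

# Hypothetical sketch of a Routes layer: one method per documented
# Kintone endpoint, returning parsed JSON for the models to consume.
class Routes:
    def __init__(self, base_url: str, session):
        self.base_url = base_url.rstrip("/")
        self.session = session  # anything with a .get(url) -> str method

    def get_record(self, app: int, record_id: int) -> dict:
        query = urlencode({"app": app, "id": record_id})
        url = f"{self.base_url}/k/v1/record.json?{query}"
        return json.loads(self.session.get(url))

# A stub session stands in for real HTTP so the sketch runs offline.
class StubSession:
    def get(self, url: str) -> str:
        return json.dumps(
            {"record": {"title": {"type": "SINGLE_LINE_TEXT", "value": "demo"}}}
        )

routes = Routes("https://example.kintone.com", StubSession())
payload = routes.get_record(app=1, record_id=42)
print(payload["record"]["title"]["value"])  # -> demo
```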
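And a sketch of the interface layer described above: a root `Kintone` object whose `@property` values make a request on each access and return models bound to the routes of the object that requested them. `BoundApp`, `get_apps`, and the response shape are hypothetical stand-ins:

```python
# Hypothetical sketch of the interface layer: properties hit the API on
# access and return models bound to the requesting object's routes.
class BoundApp:
    def __init__(self, routes, data: dict):
        self._routes = routes          # inherited from the requesting object
        self.app_id = data["appId"]
        self.name = data["name"]

class Kintone:
    def __init__(self, routes):
        self._routes = routes

    @property
    def apps(self) -> list:
        # Each access makes a fresh request through the shared routes.
        response = self._routes.get_apps()
        return [BoundApp(self._routes, app) for app in response["apps"]]

# A stub routes object stands in for the real Routes layer.
class StubRoutes:
    def get_apps(self) -> dict:
        return {"apps": [{"appId": "1", "name": "Inventory"}]}

client = Kintone(StubRoutes())
print(client.apps[0].name)  # -> Inventory
```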
Timeline
My experience with plankapy says this could all be completed within a couple of months. The one major downside is that I won't be able to test any of this until you start messing with it in production, so please be loud and violent with your issues and code reviews!
Use Case
Once we have a general Python API for Kintone, it will be incredibly easy to start distributing thumbdrives to volunteers that can handle the whole process of data collection and inventory addition with a couple keystrokes.
Production and Security
The drive would only need to contain a `.venv` Python environment and some scripts hidden in a `.scripts` folder. The primary interface would be executable batch files that trigger the script runs.
Handling API keys is a bit of a bear, but we could steal the auth token from a login session (a batch file that opens a browser, asks the volunteer to log in, then grabs the cookie from the response?). If that doesn't work, just rolling the keys every few months, or whenever a drive is lost, would be good. There are also ways to encrypt thumbdrives, so we could encrypt them and just require the password when a drive is inserted. That way we could ship the keys without putting them out there in plaintext. Plus it would give us 10,000 or so years to roll the key after we find out a drive was lost.
Conclusion
This should have been multiple issues, dear god it's a novel.