The features of Syscat
- Comprehensive
- It's designed explicitly to track everything, and it's multi-organisational by design. Extend your map of your own environment to include partners, ISPs, subsidiaries and service vendors, plus anything you know about the connections between them, all at the same level of detail that's available for your own organisation.
- Provides a real single source of truth, ensuring full consistency.
- "Everything" means that it's theoretically capable of mapping the entire internet, including every last computer, spare part, network cable and person involved. Everything.
- User-extensible - if it's not included in the default installation, you can add it.
- Extensions are first-class elements, not tacked on afterwards.
- Share schema extensions with other users, to gain a shared view and vocabulary, and to avoid duplicated data-modelling work.
- You can extend it to cover new technologies or approaches, without waiting for the vendor to get around to it.
- API-first design - if you can do it in the GUI, you can already do it via the API.
- Automation-friendly: one of the design goals was to make it as easy as possible for users to build their own tooling around it.
- You can build your own GUI, if the provided one doesn't do what you need.
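For instance, creating and reading a resource are plain HTTP calls. The sketch below is illustrative only; the port, endpoint paths, and resource type are assumptions, not Syscat's documented API:

```python
import requests

BASE = "http://localhost:8080/api/v1"  # hypothetical base URL and API path

# Create a device record (hypothetical resource type and attributes)
requests.post(f"{BASE}/Devices", data={"uid": "router-01", "description": "Core router"})

# Read it back: anything the GUI can show comes through calls like this
print(requests.get(f"{BASE}/Devices/router-01").json())
```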
- IPAM API that takes care of all the details around subnet and address management.
- IPv4/IPv6 dual-stack capable.
- Multiple VRFs.
- Multiple organisations.
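For example, allocating dual-stack subnets for a given organisation and VRF might look something like the following sketch; the paths and parameter names are invented for illustration:

```python
import requests

IPAM = "http://localhost:8080/ipam/v1"  # hypothetical base URL and API path

# Allocate an IPv4 subnet for one organisation, in a named VRF
requests.post(f"{IPAM}/subnets",
              data={"org": "myCompany", "vrf": "default", "subnet": "192.0.2.0/24"})

# The same call shape covers IPv6: dual-stack is handled by the one API
requests.post(f"{IPAM}/subnets",
              data={"org": "myCompany", "vrf": "default", "subnet": "2001:db8::/64"})
```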
- File upload/download API
- It can hold your reference documents, scanned contracts, and photos of the back of that router with the weird cabling.
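An upload and download via this API could look like the sketch below; the endpoint path and field names are placeholders:

```python
import requests

FILES = "http://localhost:8080/files/v1"  # hypothetical base URL and API path

# Upload the photo as a multipart form, filed under a human-friendly name
with open("router-rear-cabling.jpg", "rb") as photo:
    requests.post(FILES, files={"file": photo}, data={"name": "router-rear-cabling"})

# Fetch it back by that name
image = requests.get(f"{FILES}/router-rear-cabling").content
```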
- Distinguishes between intended and actual state, i.e. what should be there vs. what's actually there.
- You use different terms when allocating than when configuring. Syscat embraces this, instead of trying to pretend they're the same thing.
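One way to picture this: the same address can show up once as an allocation (intended state) and once as a discovery result (actual state), and a mismatch between the two is information worth keeping. The resource types below are hypothetical:

```python
import requests

BASE = "http://localhost:8080/api/v1"  # hypothetical base URL and API path

# Intended state: this address was allocated to router-01
requests.post(f"{BASE}/AllocatedAddresses",
              data={"uid": "192.0.2.1", "device": "router-01"})

# Actual state: a discovery run found the same address answering on router-02
requests.post(f"{BASE}/DiscoveredAddresses",
              data={"uid": "192.0.2.1", "device": "router-02"})

# Recording both lets you surface the drift, instead of one record
# silently overwriting the other.
```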
- Horizontally scalable
- Deploy as many Syscat instances as necessary, without additional configuration. Want a read-only instance for a batch process to hammer, without slowing down the one that serves the web GUI? No problem.
- Because it's based on Neo4j, the database layer can also scale out horizontally, independently of the appserver layer.
- Variable levels of detail
- Record what information you do have, and flesh it out in more detail as you decide/discover more.
- Sometimes you just don't need fine-grained detail in order to get useful things done.
- Designed for production deployment
- Docker is the main deployment method, so it's easy to install and operate.
- Schemas are uploaded as JSON documents in text files, making it easy to manage them in a version-control system, and simplifying the dev -> test -> staging -> production workflow.
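A schema extension is therefore just a JSON document that lives in git and gets pushed to each environment in turn. A minimal sketch, with invented field names and upload path:

```python
import json
import requests

# Hypothetical schema extension defining one new resource type
schema = {
    "name": "vending_machines",
    "version": 1,
    "resourcetypes": [
        {"name": "VendingMachines",
         "attributes": [{"name": "description"}, {"name": "location"}]},
    ],
}

# Keep the document in version control...
with open("schema.json", "w") as f:
    json.dump(schema, f, indent=2)

# ...then upload the same file to dev, test, staging and production in turn
with open("schema.json", "rb") as f:
    requests.post("http://localhost:8080/schema/v1", files={"schema": f})
```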
- Versioned schemas
- Create a new schema-version at any time.
- Rollback (and roll-forward) between schema versions is trivial, making it easier to test changes and recover from mistakes.
- Old versions can be deleted, so you can remove cruft instead of accumulating it.
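In practice that could be as simple as the following sketch; the endpoints and parameters are assumptions for illustration:

```python
import requests

SCHEMA = "http://localhost:8080/schema/v1"  # hypothetical base URL and API path

# Create a new schema version to test a change against
requests.post(SCHEMA, data={"create": "true"})

# If the change misbehaves, roll back to the previous version
versions = requests.get(f"{SCHEMA}/versions").json()  # e.g. [3, 2, 1]
requests.put(f"{SCHEMA}/versions", data={"version": versions[1]})

# Delete a version you no longer need, rather than accumulating cruft
requests.delete(f"{SCHEMA}/versions", data={"version": versions[-1]})
```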
- Versioned data
- Every time you modify a resource (like this page), a new version is created.
- That means you can roll back to a previous version, or compare two versions.
- Note: this GUI does nothing with versions yet, but it's already there in the API.
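A client that wants version history today could do something along these lines; the paths and parameters are hypothetical:

```python
import requests

BASE = "http://localhost:8080/api/v1"  # hypothetical base URL and API path

# List the stored versions of a resource (e.g. this page)
versions = requests.get(f"{BASE}/Pages/features/versions").json()

# Fetch the oldest and newest, then compare attribute by attribute
oldest = requests.get(f"{BASE}/Pages/features", params={"version": versions[-1]}).json()
newest = requests.get(f"{BASE}/Pages/features", params={"version": versions[0]}).json()
changed = {k: (oldest.get(k), v) for k, v in newest.items() if oldest.get(k) != v}
print(changed)
```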
- Separation of upgrades between the engine and the schema.
- When you deploy a new version of the Docker image, it won't automatically upgrade the schema, even if the image includes a newer schema version.
- You can download and install a new version of the schema without having to upgrade the engine.
- This does mean that if you want to upgrade both the engine and the schema, you have to perform them as two separate operations.
- No AI.
- It doesn't have any, and it isn't going to.
- It's here to extend the abilities of your mind (and the collective abilities of your team's minds). It's not here to replace them.
Ideas for new features are tracked as issues in the Syscat project on Codeberg.
Why doesn't it have built-in discovery/monitoring/insert-feature-here?
It's entirely passive with regard to data entry: all data has to be entered via the HTTP API, one way or another. This is a deliberate design decision, for several reasons:
- Data comes from a variety of sources, each with their own interfaces and their own take on the world, e.g. Active Directory synchronisation, network discovery tools, and human data-entry via GUIs.
- It's just not feasible to cover all possible data sources from within a single tool.
- It's simply more scalable to provide a common API that any discovery tool can use, and then build those tools on top of it. Because the same interface is presented to customers and third-party vendors, they can fill their own gaps without having to wait for me, then share the results with other users (see the sketch after this list).
- There isn't a one-size-fits-all approach anyway, especially for network discovery. Some environments are a good fit for a single, centralised service; others really need a distributed fleet of agents; and then there's the case of querying the AWS API to find out what's in there.
- Trying to fit all these capabilities into a single product leads to a bloated, complex thing that's increasingly hard to maintain and to get value out of. Better to have a simple core, plus a modular set of add-ons from which you install only the ones that serve the needs you actually have.
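As an illustration of that model, a feeder script is just a translator from one data source into the common API; the endpoint and resource type below are hypothetical:

```python
import requests

BASE = "http://localhost:8080/api/v1"  # hypothetical base URL and API path

# Results from whatever discovery method suits your environment:
# an agent fleet, a central scanner, or a cloud provider's API
discovered = [
    {"uid": "web-01", "serial_number": "ABC123"},
    {"uid": "web-02", "serial_number": "DEF456"},
]

# The feeder's only job is to translate its own view of the world
# into calls against the one common API
for host in discovered:
    requests.post(f"{BASE}/Hosts", data=host)
```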
In the same vein, it doesn't initiate any actions in the outside world. I do plan to implement a webhook-style feature, where additions/changes/deletions of data will trigger an HTTP call to some other service, but that's somewhere in the future.