Network Management - Introduction
Computer networks span all levels of the stack, from physical connections up to the mobile and web applications that connect networks of users.
Graph Databases offer a natural way of modelling, storing and querying all these types of computer networks.
A graph database like Neo4j can be utilized for:
- Configuration Management
- Impact Analysis
- Planning
- Security and Hardening of Networks
- Intrusion Detection
- Traffic Analytics
- Analytics of user behavior
In this example we want to look at Network Management and Impact Analysis from the level of routing (TCP/IP) upwards to managing applications and tracing their dependencies.
Throughout the guide you’ll find Cypher statements that you can execute by clicking on them and then hitting the run button.
Modeling
We can model the network endpoints (boxes like servers, routers, firewalls, racks) of the data center as nodes and the "cables" between them as relationships.
Other node types represent networks and interfaces.
On the application level, the operating systems, virtual machines, applications and services are modeled as entities as well.
Our example data is already set up; you’ll find some pointers in the "resources" section at the end.
DataCenter
This is the full data model of your graph.

If you want to see it yourself, run
call db.schema.visualization()
Imagine we have a DataCenter connected to an Interconnect via an Egress Router.
The datacenter uses a 10.x.x.x/8 IP address range.
The DataCenter consists of several Zones, each of which is connected to the main backbone via its own Router (10.zone.*/16).
From there each zone is broken down into rows of Racks.
Each Rack contains different types of Servers and has its own Switch to connect to the backplane of the datacenter routers.
Each Server has external network Interfaces that connect to the rack switch, the local networks being 10.zone.rack.*/24.
Each machine either runs a real Operating System (OS) or a Virtualization Manager that runs a number of Virtual Machines.
For operational simplicity we only run one Application per OS, which uses a number of Ports on the external interface.
Usually we would get this kind of information from a configuration management database (CMDB), network management tools or agents installed on the machines.
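If such data comes from a CMDB export, it could be loaded with LOAD CSV. This is only a sketch: the file name and its columns (name, ip, rack) are hypothetical, not part of the example dataset.
// hypothetical CMDB export: one row per machine with name, ip and rack columns
LOAD CSV WITH HEADERS FROM 'file:///cmdb_machines.csv' AS row
MERGE (m:Machine {name: row.name})
SET m.ip = row.ip
MERGE (r:Rack {name: row.rack})
MERGE (r)-[:HOLDS]->(m);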
Network Exploration: DataCenter and Zones
Let’s walk through the data step by step, starting with the DataCenter.
MATCH network = (dc:DataCenter {name:"DC1",location:"Iceland, Rekjavik"})
-[:CONTAINS]->(:Router)
-[:ROUTES]->(:Interface)
RETURN network;

The datacenter consists of 4 zones, each of which has its own separate Network 10.zone.*/16 and its own Router.
We can draw out that verbal description in a query with patterns matching the network parts.
MATCH (dc:DataCenter {name:"DC1"})-[:CONTAINS]->(re:Router:Egress)-[:ROUTES]->(rei:Interface)
MATCH (nr:Network:Zone)<-[:CONNECTS]-(rei)
// router in DC, connect it via an interface to the zone network
MATCH (dc)-[:CONTAINS]->(r:Router)-[:ROUTES]->(ri:Interface)-[:CONNECTS]->(nr)
RETURN *;
To visualize the DataCenter and its components so far, we can also start at the center and then go 3 hops out.
MATCH path = (dc:DataCenter)-[*3]-(:Network)
RETURN path;
We could also get statistical information, like the addresses of routers and interfaces in each network.
You can see very well how the graph representation in the match pattern resembles our domain model.
MATCH (r:Router)-[:ROUTES]->(ri:Interface)-[:CONNECTS]->(nr:Network)
WHERE r.zone IS NOT NULL
RETURN nr.ip as network_ip, ri.ip as router_if_ip, r.name as router, r.zone as zone;
╒════════════╤══════════════╤═════════╤══════╕
│"network_ip"│"router_if_ip"│"router" │"zone"│
╞════════════╪══════════════╪═════════╪══════╡
│"10.1"      │"10.1.0.254"  │"DC1-R-1"│1     │
├────────────┼──────────────┼─────────┼──────┤
│"10.2"      │"10.2.0.254"  │"DC1-R-2"│2     │
├────────────┼──────────────┼─────────┼──────┤
│"10.3"      │"10.3.0.254"  │"DC1-R-3"│3     │
├────────────┼──────────────┼─────────┼──────┤
│"10.4"      │"10.4.0.254"  │"DC1-R-4"│4     │
└────────────┴──────────────┴─────────┴──────┘
Network Exploration: Racks

Each zone contains 10 Racks, each of which has its own Switch and a subnet following the pattern 10.zone.rack.*/24.
MATCH (dc:DataCenter {name:"DC1"})-[:CONTAINS]->(rack:Rack)-[:HOLDS]->(s:Switch)-[:ROUTES]->(si:Interface)<-[:ROUTES]-(nr:Network:Zone)
RETURN *;
Now our network has grown quite a bit:
MATCH network = (dc:DataCenter)-[*6]-(:Rack)
RETURN network;
Network Connectivity
Now we can have a look at the network connectivity in our datacenter.
To examine the overall connections, we use shortest paths, which represent the most efficient routes between each rack and the egress router.
MATCH path = allShortestPaths( (rack:Rack)-[:HOLDS|ROUTES|CONNECTS*]-(router:Router:Egress) )
RETURN length(path) as hops, count(*) as count;
What happens if one of our cables gets loose or cut, i.e. the ROUTES relationship between the switch’s interface and the network is gone?
Let’s cut the cable of the first switch.
MATCH (:Interface {ip:"10.1.1.254"})<-[rel:ROUTES]-(:Network)
DELETE rel
After the cut, connectivity is reduced to 39 routes of 5 hops each:
MATCH path = allShortestPaths( (rack:Rack)-[:HOLDS|ROUTES|CONNECTS*]-(router:Router:Egress) )
RETURN length(path) as hops, count(*) as count;
Now all the machines in that Rack are cut off; there is no connection anymore, which we can demonstrate by trying to find a shortest path.
MATCH connection = allShortestPaths( (rack:Rack {name:"DC1-RCK-1-1"})-[:HOLDS|ROUTES|CONNECTS*]-(router:Router:Egress) )
RETURN connection;
How can we fix that? We could connect each switch to the other three zone networks too; then we would survive the loss of 3 of those 4 connections.
// for all zones
MATCH (nr:Network:Zone)
// find *all* switches and their interface
MATCH (s:Switch)-[:ROUTES]->(si:Interface)
// connect them to all the zones, if not yet connected
MERGE (si)<-[:ROUTES]-(nr);
MATCH path = allShortestPaths((rack:Rack)-[:HOLDS|ROUTES|CONNECTS*]-(router:Router:Egress))
RETURN length(path) as hops, count(*) as count;
╒══════╤═══════╕
│"hops"│"count"│
╞══════╪═══════╡
│5     │160    │
└──────┴───────┘

Let’s cut the first cable of that switch again.
MATCH (:Interface {ip:"10.1.1.254"})<-[rel:ROUTES]-(:Network)
WITH rel LIMIT 1
DELETE rel
But that Rack is still connected via 3 alternative routes.
MATCH path = allShortestPaths((rack:Rack {zone:1,rack:1})-[:HOLDS|ROUTES|CONNECTS*]-(router:Router:Egress))
RETURN length(path) as hops, count(*) as count;
╒══════╤═══════╕
│"hops"│"count"│
╞══════╪═══════╡
│5     │3      │
└──────┴───────┘
Now let’s look at the servers in those racks.
Machine types
Similar to the machines you can rent on AWS, we use machine types, for which we auto-created some reasonable capacities for CPU, RAM and DISK.
MATCH (t:Type)
RETURN t.name, t.id, t.cpu, t.ram, t.disk;
╒══════════════════╤══════╤═══════╤═══════╤════════╕
│"t.name"          │"t.id"│"t.cpu"│"t.ram"│"t.disk"│
╞══════════════════╪══════╪═══════╪═══════╪════════╡
│"xs-1/1/1"        │0     │1      │1      │1       │
├──────────────────┼──────┼───────┼───────┼────────┤
│"s-2/4/5"         │1     │2      │4      │5       │
├──────────────────┼──────┼───────┼───────┼────────┤
│"m-4/16/25"       │2     │4      │16     │25      │
├──────────────────┼──────┼───────┼───────┼────────┤
│"l-8/64/125"      │3     │8      │64     │125     │
├──────────────────┼──────┼───────┼───────┼────────┤
│"xl-16/256/625"   │4     │16     │256    │625     │
├──────────────────┼──────┼───────┼───────┼────────┤
│"xxl-32/1024/3125"│5     │32     │1024   │3125    │
└──────────────────┴──────┴───────┴───────┴────────┘
Machines
Each Rack contains 200 machines of the types we just introduced, so that in total we get 8000 servers in our datacenter.
As expected, the distribution of the types is inverse to their capabilities.
As the graph visualization of our full datacenter would be pretty but otherwise useless …


We’d rather look at the contents of a single rack, DC1-RCK-2-1:
MATCH (r:Rack {name:"DC1-RCK-2-1"})-[:HOLDS]->(m:Machine),
(m)-[:ROUTES]->(i:Interface)-[:CONNECTS]->(si)<-[:ROUTES]-(s:Switch),
(m)-[:TYPE]->(type:Type)
RETURN *
or at the stats of DC1-RCK-2-1:
MATCH (r:Rack {name:"DC1-RCK-2-1"})-[:HOLDS]->(m:Machine),
(m)-[:ROUTES]->(i:Interface)-[:CONNECTS]->(si)<-[:ROUTES]-(s:Switch),
(m)-[:TYPE]->(type:Type)
RETURN r.name as rack, si.ip as switchIp, properties(type) as type, count(m) as machines, min(i.ip) as minIp, max(i.ip) as maxIp
ORDER BY machines DESC;
╒═════════════╤════════════╤═══════════════════════════════════════════════════════════════════════════════════════╤══════════╤════════════╤════════════╕
│"rack"       │"switchIp"  │"type"                                                                                 │"machines"│"minIp"     │"maxIp"     │
╞═════════════╪════════════╪═══════════════════════════════════════════════════════════════════════════════════════╪══════════╪════════════╪════════════╡
│"DC1-RCK-2-1"│"10.2.1.254"│{"disk":"5","name":"s-2/4/5","cpu":"2","id":"1","type":"s","ram":"4"}                  │"94"      │"10.2.1.100"│"10.2.1.99" │
├─────────────┼────────────┼───────────────────────────────────────────────────────────────────────────────────────┼──────────┼────────────┼────────────┤
│"DC1-RCK-2-1"│"10.2.1.254"│{"disk":"1","name":"xs-1/1/1","cpu":"1","id":"0","type":"xs","ram":"1"}                │"52"      │"10.2.1.1"  │"10.2.1.9"  │
├─────────────┼────────────┼───────────────────────────────────────────────────────────────────────────────────────┼──────────┼────────────┼────────────┤
│"DC1-RCK-2-1"│"10.2.1.254"│{"disk":"25","name":"m-4/16/25","cpu":"4","id":"2","type":"m","ram":"16"}              │"34"      │"10.2.1.147"│"10.2.1.180"│
├─────────────┼────────────┼───────────────────────────────────────────────────────────────────────────────────────┼──────────┼────────────┼────────────┤
│"DC1-RCK-2-1"│"10.2.1.254"│{"disk":"125","name":"l-8/64/125","cpu":"8","id":"3","type":"l","ram":"64"}            │"13"      │"10.2.1.181"│"10.2.1.193"│
├─────────────┼────────────┼───────────────────────────────────────────────────────────────────────────────────────┼──────────┼────────────┼────────────┤
│"DC1-RCK-2-1"│"10.2.1.254"│{"disk":"625","name":"xl-16/256/625","cpu":"16","id":"4","type":"xl","ram":"256"}      │"5"       │"10.2.1.194"│"10.2.1.198"│
├─────────────┼────────────┼───────────────────────────────────────────────────────────────────────────────────────┼──────────┼────────────┼────────────┤
│"DC1-RCK-2-1"│"10.2.1.254"│{"disk":"3125","name":"xxl-32/1024/3125","cpu":"32","id":"5","type":"xxl","ram":"1024"}│"2"       │"10.2.1.199"│"10.2.1.200"│
└─────────────┴────────────┴───────────────────────────────────────────────────────────────────────────────────────┴──────────┴────────────┴────────────┘
We can also query for a distribution of machine types across the datacenter.
MATCH (r:Rack)-[:HOLDS]->(m:Machine)-[:TYPE]->(type:Type)
RETURN properties(type) as type, count(*) as c
ORDER BY c DESC;
╒══════════════════════════════════════════════════════════════════╤════╕
│"type"                                                            │"c" │
╞══════════════════════════════════════════════════════════════════╪════╡
│{"disk":5,"name":"s-2/4/5","cpu":2,"id":1,"ram":4}                │3760│
├──────────────────────────────────────────────────────────────────┼────┤
│{"disk":1,"name":"xs-1/1/1","cpu":1,"id":0,"ram":1}               │2080│
├──────────────────────────────────────────────────────────────────┼────┤
│{"disk":25,"name":"m-4/16/25","cpu":4,"id":2,"ram":16}            │1360│
├──────────────────────────────────────────────────────────────────┼────┤
│{"disk":125,"name":"l-8/64/125","cpu":8,"id":3,"ram":64}          │520 │
├──────────────────────────────────────────────────────────────────┼────┤
│{"disk":625,"name":"xl-16/256/625","cpu":16,"id":4,"ram":256}     │200 │
├──────────────────────────────────────────────────────────────────┼────┤
│{"disk":3125,"name":"xxl-32/1024/3125","cpu":32,"id":5,"ram":1024}│80  │
└──────────────────────────────────────────────────────────────────┴────┘
Or if we treat our datacenter as a supercomputer, what’s the total amount of CPUs, RAM and disk available:
MATCH (m:Machine)-[:TYPE]->(type:Type)
RETURN count(*) as count, sum(type.cpu) as cpus, sum(type.ram) as ram, sum(type.disk) as disk;
╒═══════╤══════╤══════╤══════╕
│"count"│"cpus"│"ram" │"disk"│
╞═══════╪══════╪══════╪══════╡
│8000   │24960 │205280│494880│
└───────┴──────┴──────┴──────┘
Software: Operating Systems and Applications
Bare-metal hardware is cool, but something has to run on it to make it usable.
Most likely that will be some kind of virtualization infrastructure that allows dynamic reallocation of compute, memory and disk resources.
Because of the added complexity, we skip virtualization for now.
For our software we differentiate between Operating Systems, Services and Applications (which could also be microservices).
Each of them has a name, version(s) and dependencies.
In a more elaborate model we could also handle other resource requirements like RAM / CPU / DISK per running software instance.
Let’s look at our available operating systems.
MATCH (o:OS:Software)-[:VERSION]->(v)
OPTIONAL MATCH (v)<-[:PREVIOUS]-(vnext)
RETURN o.name as os, v.name as version, vnext.name as next_version
ORDER BY os, version;
Similarly, for our other software:
MATCH (s:Software) WHERE not s:OS
OPTIONAL MATCH (s)-[:VERSION]->(v)
OPTIONAL MATCH (s)-[:DEPENDS_ON]->(dv)<-[:VERSION]-(d)
RETURN s.name, collect(v.name) as versions, [x IN collect([d.name,dv.name]) WHERE x[0] IS NOT NULL] as dependencies, s.ports;
Software: Running on Machines
Each of our machines is set up to run an OS and a single application, each of which might require other dependencies that are also installed.

MATCH (m:Machine) WHERE (m)-[:RUNS]->() AND rand() < 0.05 WITH m LIMIT 1
MATCH (m)-[r:RUNS]->(p:Process)-[i:INSTANCE]->(sv)
OPTIONAL MATCH (sv)<-[v:VERSION]-(sw)
RETURN *

Dependency Analysis
We could look at dependencies between data center elements on the physical level, like routers, switches and interfaces.
Another way to look at it is to determine dependencies between machines based on their internal and external connections.
But we can also use the software and its dependencies to determine bottlenecks and components that many others depend on.
Let’s look at all the software that uses Neo4j and the running Neo4j instances.
MATCH (s)-[:DEPENDS_ON]->(nv:Version)<-[:VERSION]-(n:Software:Service {name:"neo4j"})
MATCH (s)<-[:INSTANCE]-(sp)<-[:RUNS]-(sm:Machine)
MATCH (sp)-[:DEPENDS_ON]->(np)-[:INSTANCE]->(nv)
MATCH (np)<-[:RUNS]-(nm:Machine)
RETURN sm as software_machine, sp as software_process, s as software, nv as neo_version, np as neo4j_process, nm as neo_machine
LIMIT 10
Configuration Management
Proper IT infrastructures use a large number of configuration parameters to customize commodity hardware and software. To manage all of the variables, Configuration Management Databases (CMDBs) are used. Systems require certain variables, and can report what is currently configured so that the CMDB can detect issues, and send necessary updates.
In the past, CMDBs were mostly used for network, hardware and OS level configuration. Today, their use has expanded into services to support modern architectures. A number of related systems have popped up, such as ZooKeeper, Consul, Eureka, and others.
Due to the variety of systems used for providing configuration to the infrastructure, it is very useful to create a unified, up to date view of the situation in your systems graph.
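As a sketch of building such a unified view (the $configEntries parameter, the Config label and the CONFIGURED relationship are assumptions, not part of the example dataset), each entry reported by one of those systems could be merged into the graph and tagged with its source:
// hypothetical: one entry per reported config value, tagged with its source system
UNWIND $configEntries AS entry
MATCH (m:Machine {name: entry.machine})
MERGE (c:Config {key: entry.key})
MERGE (m)-[s:CONFIGURED]->(c)
SET s.value = entry.value, s.source = entry.source, s.updated = timestamp();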
Upgrade OS Version and its Dependencies for a Version Range
We’re looking for machines in our Graph-CMDB whose operating systems have to be updated.
The OS versions are linked in a chain of :PREVIOUS connections.
So we can easily determine if a machine runs an older version than expected, even if version numbers are not sortable.
Those machines will be marked for an update to the correct version.
MATCH (os:OS:Software)-[:VERSION]->(newVersion) WHERE os.name = 'Debian' and newVersion.name = '8-Jessie'
MATCH (m:Machine)-[:RUNS]->(op:OS:Process)-[:INSTANCE]->(currentVersion)
WHERE (currentVersion)<-[:PREVIOUS*]-(newVersion)
// create update request
CREATE (m)-[:UPDATE_TO {ts:timestamp()}]->(newVersion)
All machines with UPDATE_TO requests can be found by tools and operators.
MATCH (r:Rack)-[:HOLDS]->(m:Machine)-[:UPDATE_TO]->(vNew:Version)<-[:VERSION]-(os:OS:Software)
MATCH (m)-[:RUNS]->(:OS:Process)-[:INSTANCE]->(vCurr)
WHERE vCurr <> vNew
RETURN r.name, m.name, os.name, vCurr.name as currentVersion, vNew.name as newVersion
LIMIT 100;
When the OS on a machine is physically updated, the old :OS:Process is stopped and a new one is started.
MATCH (m:Machine)-[:UPDATE_TO]->(vNew:Version)<-[:VERSION]-(os:OS:Software)
MATCH (m)-[:RUNS]->(op:OS:Process)-[:INSTANCE]->(vCurr)
WHERE vCurr <> vNew
CREATE (m)-[:RUNS]->(opNew:OS:Process)-[:INSTANCE]->(vNew)
DETACH DELETE op;
After the physical update has been performed, the machines will report the now updated version and the update request can be removed.
MATCH (m:Machine)-[update:UPDATE_TO]->(v:Version)<-[:VERSION]-(os:OS:Software)
WHERE (m)-[:RUNS]->(:OS:Process)-[:INSTANCE]->(v)
DELETE update;
IT-Monitoring and Governance
Live network operations need to be supervised to ensure smooth operations, prevent bottlenecks, protect from attacks and vulnerabilities and allow maintenance planning and failure handling.
The information is acquired either by listening on network traffic and inferring running services as well as user and application activity, combined with port-scans.
Alternatively, agents installed on the machines report the state of each server to network or centralized databases, which update the live state of the network.
Based on our existing model, those incoming messages and events can do the following:
- Create new entries for Servers, Switches, Interfaces
- Track running Services via used ports and traffic
- Infer user and application activity and group it by network segment, source and used service
- Detect abnormal operations like attacks or potential bottlenecks and issue warnings
- Track violations of rules, like isolation of the DMZ, certain firewall rules etc.
Here is an example of a new connection coming in and the graph being updated accordingly.
Subsequent information for that connection will be aggregated until it is closed; then the totals could be added to the general CONNECTIONS relationship between the two IPs.
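The closing counterpart could look like this sketch, assuming a hypothetical CloseConnection event that carries the same id as the open event; the accumulated packet totals are added to the CONNECTIONS relationship and the connection is marked as closed:
// hypothetical CloseConnection event: roll the connection's totals up into CONNECTIONS
UNWIND $events AS event
WITH event WHERE event.type = 'CloseConnection'
MATCH (c:Connection {id: event.id})
MATCH (sp:Port)<-[:FROM]-(c)-[:TO]->(tp:Port)
MATCH (si:Interface)-[:OPENS]->(sp), (ti:Interface)-[:LISTENS]->(tp)
MERGE (si)-[cstats:CONNECTIONS]->(ti)
SET cstats.packets = coalesce(cstats.packets,0) + c.packets,
    cstats.volume = coalesce(cstats.volume,0) + c.packets * c.mtu,
    c.closed = event.time;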
We can generate some events by having processes on some machines access processes on other (random) machines.
MATCH (m:Machine) WITH collect(m) as machines
WITH machines, size(machines) as len
UNWIND range(1,10) as idx
WITH machines[toInteger(rand()*len)] as source, machines[toInteger(rand()*len)] as target
MATCH (source)-[:ROUTES]->(si:Interface)-[:EXPOSES]->(sp:Port)<-[:LISTENS]-(sourceAppProcess)-[:INSTANCE]->(sourceApp)
WITH target, source,si,head(collect(sp)) as sp, sourceAppProcess,sourceApp
// todo limit to first port
MATCH (target)-[:ROUTES]->(ti:Interface)-[:EXPOSES]->(tp:Port)<-[:LISTENS]-(targetAppProcess)-[:INSTANCE]->(targetApp)
WITH source,si,sp, sourceAppProcess,sourceApp,target,ti,head(collect(tp)) as tp, targetAppProcess, targetApp
// todo limit to first port
RETURN {id: randomUUID(), type:"OpenConnection",source:{ip:si.ip, port:sp.port},target:{ip:ti.ip,port:tp.port},
connection: {source:sourceApp.name, target:targetApp.name, user: "user"+toString(toInteger(rand()*1000))+"@"+source.name,
time:timestamp(), packets: 1, mtu: 1500 }} as event
:param events:
[
{"source":{"ip":"10.1.7.100","port":11210},"id":"3e41d6f0-fdce-48f4-9bff-818359d8f0af","target":{"ip":"10.3.3.112","port":8080},
"connection":{"source":"couchbase","target":"webapp","user":"user436@DC1-RCK-1-7-M-100","time":1490540382971},"type":"OpenConnection",
"packets": 1, "mtu": 1500, "time": 1490904418539 },
{"source":{"ip":"10.1.4.91","port":7474},"id":"fed44be6-55f5-4e42-aab1-bebc5c818268","target":{"ip":"10.4.6.7","port":8080},
"connection":{"source":"neo4j","target":"webapp","user":"user911@DC1-RCK-1-4-M-91","time":1490540382971},"type":"OpenConnection",
"packets": 1, "mtu": 1500, "time": 1490904464824 }
]
UNWIND $events AS event
WITH event WHERE event.type = 'OpenConnection'
MERGE (si:Interface {ip:event.source.ip})
MERGE (si)-[:OPENS]->(sp:Port {port: event.source.port})
MERGE (ti:Interface {ip:event.target.ip})
MERGE (ti)-[:LISTENS]->(tp:Port {port:event.target.port})
CREATE (sp)<-[:FROM]-(c:Connection {id:event.id})-[:TO]->(tp)
SET c += event.connection // type, timestamp, user-info, ...
MERGE (si)-[cstats:CONNECTIONS]->(ti)
SET cstats.count = coalesce(cstats.count,0) + 1
SET cstats.packets = coalesce(cstats.packets,0) + event.packets
SET cstats.volume = coalesce(cstats.volume,0) + event.packets * event.mtu
All the information is aggregated in a live graph representation which is available for querying for alerts & notifications, dashboards, inventory summaries, reports and more.
Historic information can be stored as well, as a timeline chain of changes attributed to their causes. Both can be queried by operators to drill down into detailed analysis.
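One hedged way to model such a timeline (the State label, the $ip and $change parameters and the LATEST/PREVIOUS relationships are assumptions, not part of the example dataset) is to prepend each reported change to a per-interface chain:
// hypothetical timeline chain: the newest State hangs off LATEST,
// older states are reachable via PREVIOUS
MATCH (i:Interface {ip: $ip})
OPTIONAL MATCH (i)-[latest:LATEST]->(prev:State)
CREATE (s:State {ts: timestamp(), change: $change})
CREATE (i)-[:LATEST]->(s)
FOREACH (p IN CASE WHEN prev IS NULL THEN [] ELSE [prev] END |
  CREATE (s)-[:PREVIOUS]->(p))
DELETE latest;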
MATCH (si:Interface)-[:OPENS]->(sp:Port)<-[:FROM]-(c:Connection)-[:TO]->(tp:Port)<-[:LISTENS]-(ti:Interface)
WHERE c.type = 'OpenConnection'
RETURN si.ip as source, ti.ip as target, apoc.date.format(c.time,'ms','yyyy-MM-dd HH') as hour, count(distinct c) as count
ORDER BY hour ASC, count DESC
LIMIT 100;
Examples of graph-based Network Management Solutions
A number of commercial solutions provide this kind of service; some of them run on Neo4j.
There are also open source solutions like Mercator from Lending Club and the Assimilation Project by Alan Robertson.
This real-time IT inventory information is also required for due diligence, e.g. for corporate investments, mergers or acquisitions.
Monitoring Use-Cases
Our graph contains both the static topological information and a lot of runtime information using the base topology. From the runtime data we can retrieve different metrics.
For instance, the minimal, average and maximal runtimes of software instances per type:
MATCH (v)<-[:INSTANCE]-(sp:Process)<-[:RUNS]-(sm:Machine)
MATCH (s:Software)-[:VERSION]->(v:Version)
WITH s.name as software, v.name as version, timestamp() - sp.startTime as runtime
RETURN software, version, count(*) as instances, { min: min(runtime), max: max(runtime), avg:avg(runtime) } as runtime
Resource Management Graph
If you use a resource manager like Apache Mesos (or DC/OS), Kubernetes etc., you specify for each piece of software you run not just its name, version and dependencies but also resource requirements like CPU, RAM, disk, ports and more.
A scheduler then matches the available resources of a configured machine cluster against the needs and numbers of the required software instances, scheduling them and allocating resources accordingly. It also takes care of health checks and the (re)starting, (re)scheduling and (re)routing of individual new or failed instances.
The resource graph of such a system is interesting to model and reason about, especially if other requirements like indicated co-location or disk reuse are taken into account.
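As a small sketch of such reasoning (the cpu requirement property on Software and the $software parameter are assumptions, not part of the example dataset), we could look for machines whose type still leaves enough spare CPU for another instance of a given piece of software:
// hypothetical capacity check: spare CPU per machine vs. the software's cpu requirement
MATCH (new:Software {name: $software})
MATCH (m:Machine)-[:TYPE]->(t:Type)
OPTIONAL MATCH (m)-[:RUNS]->(:Process)-[:INSTANCE]->(:Version)<-[:VERSION]-(s:Software)
WITH new, m, t, coalesce(sum(s.cpu), 0) as usedCpu
WHERE t.cpu - usedCpu >= new.cpu
RETURN m.name as machine, t.cpu - usedCpu as freeCpu
ORDER BY freeCpu DESC LIMIT 10;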
References
- WP: Graph Databases Solve Problems in Network and Data Center Management
- Lending Club Engineering created a number of network management projects using Neo4j
- MacGyver: DevOps Multi-Tool (Repositories, Slides)
- Mercator: produce graph model projections of infrastructure (Repository)
- Article: Simplifying Virtualization Management with Graph Databases