In the conclusion of my last post, Data Leak Visibility, I mentioned that the Elasticsearch, Logstash and Kibana (ELK) stack can be used to aggregate and analyze data from multiple osquery agents. This post provides a high-level view of how the agents were connected to the ELK stack, and how I was able not only to analyze the data but also to create very useful visualizations of the data from multiple sources.

Let us start with a high-level view of what my deployment looks like. I have a Kolide Fleet manager set up in the AWS cloud, as described in my previous post, Data Leak Visibility. There are multiple ways to deploy the ELK stack: a) as a Docker container, locally or in the cloud; b) via AWS's fully managed Elasticsearch service; or c) on a local Linux (CentOS or Ubuntu) machine. I chose option (c), but had to ensure that the machine had a static IP assigned to it. In a typical enterprise deployment, option (b) would be a good choice, as it is a fully managed service, albeit at a higher cost. In fact, for both the Fleet manager and the ELK stack, cloud deployment is the way to go, since it provides full visibility into users within the enterprise as well as those not connected to the enterprise network.

There are a number of ways to ship the data collected by osquery to the ELK stack, as described in Aggregating Logs. One such way is to use Filebeat directly to read the osquery results and ship the data to the ELK stack. I chose the Filebeat method because it allows Kibana to display a pre-canned dashboard for an osquery pack named "it-compliance". The dashboard provides visibility into the managed fleet's operating systems (currently Darwin and Linux only), mount points, and installed DEB packages. Once Elasticsearch is set up, with osquery and Filebeat running, it is easy to analyze the data being ingested into your Elasticsearch stack using Kibana.

The Kolide Fleet manager provides an intuitive UI to create and test queries. Typically, in an enterprise environment one enables osquery packs for specific needs (compliance, vulnerability management, monitoring, etc.), but I would recommend not hardcoding the osquery configuration in a local configuration file, and instead using the Kolide Fleet manager to deploy the configuration and packs. There is also a great tool called fleetctl, created by the authors of Kolide Fleet, for managing the manager's configuration and for converting osquery packs. Once the query results are validated, they can be deployed as packs directly from the manager, either via the fleetctl tool or the UI.
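To make the Filebeat pipeline concrete: osquery writes each scheduled-query result to its results log as a standalone JSON document, one per line, and Filebeat ships each line as one event to Elasticsearch. The sketch below parses such a line. The field names (`name`, `hostIdentifier`, `action`, `columns`) follow osquery's documented differential log format, but the sample values (host name, pack path) are invented purely for illustration.

```python
import json

# One abbreviated differential result line, as osquery would write it to
# /var/log/osquery/osqueryd.results.log. The values here are made-up samples.
sample_line = json.dumps({
    "name": "pack/it-compliance/mounts",   # hypothetical pack/query name
    "hostIdentifier": "web-01",            # hypothetical host
    "unixTime": 1614200000,
    "action": "added",                     # "added" or "removed" in diff mode
    "columns": {"path": "/", "type": "ext4"},
})

def parse_result(line):
    """Parse one osquery differential result line into (query, host, action, row)."""
    event = json.loads(line)
    return event["name"], event["hostIdentifier"], event["action"], event["columns"]

query, host, action, row = parse_result(sample_line)
print(query, host, action, row["type"])
# → pack/it-compliance/mounts web-01 added ext4
```

Filebeat's osquery module does essentially this decoding for you before indexing, which is what makes the pre-canned "it-compliance" dashboard possible.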