Modern Computer Inventory Process

Network computer inventory is a complex process

What's all this in aid of? Computers are the tools of modern business, much like a lorry or a lathe. Like any other machines, all of a company's PCs have to be managed. The system administrator has to install and update operating systems, replace broken or outdated hardware, make decisions on purchasing necessary software and coordinate them with the company's management and accounts department, audit the availability of license keys, and so on. If the system administrator does not pay enough attention to these tasks, failures can follow.

Imagine a network that consists of hundreds of computers. The head manager decides to purchase software that the company needs for its operation. The company spends a lot of money on licenses, but once all the documents are signed and the license keys are received, a very unpleasant issue comes to light: most of the operating systems currently running on the network do not meet the requirements of the purchased software. Updating or reinstalling the OS on a few hundred computers takes a great deal of time and money, and part of the license period is wasted as a result. If the head manager had had an OS report before purchasing the expensive software, he could have properly evaluated the feasibility of deploying it, and the company would have saved a lot of money.
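Even a trivial script over an existing inventory database could surface such a problem before the purchase. The sketch below is a hypothetical Python illustration: the record layout and the required version are invented for the example, not taken from any particular inventory product.

    # Hypothetical illustration: summarize OS versions from an inventory list
    # and flag hosts that fall below a required version. The record layout
    # ("host", "os_version") is an assumption made for this example.
    from collections import Counter

    REQUIRED_VERSION = (10, 0)  # assumed minimum OS version for the new software

    inventory = [
        {"host": "pc-001", "os_version": (6, 1)},   # e.g. an older Windows release
        {"host": "pc-002", "os_version": (10, 0)},
        {"host": "pc-003", "os_version": (6, 1)},
    ]

    counts = Counter(rec["os_version"] for rec in inventory)
    print("OS versions in use:", dict(counts))

    too_old = [rec["host"] for rec in inventory
               if rec["os_version"] < REQUIRED_VERSION]
    print(f"{len(too_old)} of {len(inventory)} hosts need an OS upgrade:", too_old)

A report like this, run before signing the contract, would have shown at a glance how many machines need upgrading first.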

The computer inventory process cannot be quick, as all the collected data has to be documented carefully. Thanks to modern technology, however, there are many tools that make it easier and less time consuming.

Such programs are easy to use. The system administrator just needs to install one on his computer and scan the network to create a list of all the company's computers. The program then polls each of them and collects all the data into a single inventory database. The process is automated: the system administrator only has to configure the program's settings, for instance, administrative rights on the remote computers if the inventory data is collected via WMI (Windows Management Instrumentation). A computer inventory program also lets him configure the polling schedule, so he does not have to think about that either.
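To give a rough picture of what such polling involves, here is a minimal sketch using the third-party Python wmi package (Windows only). The host names and credentials are placeholders; a real inventory product would add network discovery, scheduling, and a proper database behind this loop.

    # Minimal sketch of WMI-based inventory polling (Windows only).
    # Requires the third-party "wmi" package. Host names and credentials
    # below are placeholders, not real values.
    import wmi

    hosts = ["pc-001", "pc-002"]                 # assumed result of a network scan
    user, password = "DOMAIN\\admin", "secret"   # administrative rights are required

    inventory = {}
    for host in hosts:
        try:
            conn = wmi.WMI(computer=host, user=user, password=password)
            os_info = conn.Win32_OperatingSystem()[0]
            cpu = conn.Win32_Processor()[0]
            inventory[host] = {
                "os": f"{os_info.Caption} {os_info.Version}",
                "cpu": cpu.Name,
            }
        except wmi.x_wmi as err:                 # host offline or access denied
            print(f"could not poll {host}: {err}")

    print(inventory)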

A problem can arise if some of the company's computers are not on the network. When this happens, the system administrator can use alternative methods of gathering the information, for instance, the clients or agents that are usually included in the program package. In this case, the IT manager has to spend some time installing and configuring the clients on the remote PCs. The clients do the rest of the work: they launch automatically when users switch on their computers and start gathering inventory data. The system administrator just imports the data files into the main program module and gets the whole network inventory database in one place, where he can analyze it, create reports and summary tables, and so on. This method may seem rather time consuming, but it still takes less time than a manual inventory, which means the system administrator can spend his time solving other network problems.
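The agent-and-import workflow can be pictured roughly as follows. This is a hypothetical sketch using only the Python standard library; the file format and the drop folder are assumptions for illustration, not the conventions of any particular product.

    # Hypothetical agent/import sketch (standard library only).
    # Agent side: runs at user logon and dumps local facts to a shared folder.
    # Import side: merges all the dropped files into one inventory dict.
    import json
    import platform
    import socket
    from pathlib import Path

    DROP_DIR = Path("inventory_drop")    # assumed shared folder path

    def agent_collect():
        """Gather local inventory data and write it as <hostname>.json."""
        record = {
            "host": socket.gethostname(),
            "os": f"{platform.system()} {platform.release()}",
            "machine": platform.machine(),
        }
        DROP_DIR.mkdir(exist_ok=True)
        (DROP_DIR / f"{record['host']}.json").write_text(json.dumps(record))

    def import_all():
        """Merge every agent file into a single inventory dict."""
        return {
            path.stem: json.loads(path.read_text())
            for path in DROP_DIR.glob("*.json")
        }

    agent_collect()                      # would normally run on each remote PC
    print(import_all())                  # runs on the administrator's machine

In practice the agent would be registered to run at logon and the drop folder would sit on a network share, but the shape of the workflow is the same: collect locally, merge centrally.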

In general, it does not matter what kind of service a company offers to its customers: its computers are resources like any other assets. They are easier to control if you use a modern inventory program and audit the hardware and software constantly. This measure helps to optimize not only the company's expenses but its management as well.

More Stories By Dmitriy Stepanov

Dmitriy Stepanov is the CEO of 10-Strike Software, a developer of network inventory, network monitoring, and bandwidth monitoring software. 10-Strike Software is a software development company that has been offering high-quality networking products since 1999 and specializes in Windows network software for corporate users.
