Keep up with all of the essential KM news with a FREE subscription to KMWorld magazine. Find out more and subscribe today!

Trendy (and tried) ways to secure your knowledge

After developing a strategy, implementing procedures and establishing the technology infrastructure to support a KM initiative, how do you protect the valuable resources that are being captured and stored? Are you sure that the explicit knowledge that resides in the file, application, Web, E-mail and database servers is secure from potential disaster?

With weather-related disasters regularly striking various parts of the world, backing up and protecting your knowledge warehouse becomes vital. So what storage and backup trends are evolving?

With the rapid deployment of numerous application servers to meet user and corporate requirements, dedicated storage networks have emerged as a trend, according to Jonathan Greene, Computer Associates' (www.cai.com) director of storage product marketing. Centralizing storage eases administration and can increase overall performance and availability.


Although centralized storage has been around for quite some time with mainframes, taking it to the enterprise network is a new concept. While many people are familiar with local area networks (LANs) and wide area networks (WANs), storage area networks (SANs) are relatively new in enterprise data access and backup.

A SAN, typically connected to the corporate network via Fibre Channel, is basically a dedicated sub-network that consolidates all storage and backup devices. Instead of having multiple storage devices (hard drives, disk arrays, CD-ROM towers, etc.) attached to individual servers across the organization's campus, SANs bring all those devices into one centrally managed location.

Not only are SANs easier to administer, they make more efficient use of available storage space since multiple applications can reside on a single device. "Storage can no longer be managed in isolation or thought of as attached to a server," said Greene.

Decreasing window

Chris Ilg, senior product manager at Exabyte (www.exabyte.com), a manufacturer of tape backup solutions, said speed of the backup should be the primary issue for companies. "Customers are demanding faster backup methods more so than larger backups. There is a premium on the available time to do a backup," said Ilg. "Before there was an eight-hour window to do a backup. Now it is four to six hours."

That is true, according to Ilg, because more end-of-the-day processing is being done and the workday is longer. While the workday used to end at 5 p.m. or 5:30 p.m., it's now 7 p.m. or later. And the first staff members begin to arrive at 5 a.m. or 6 a.m., so the backup time has shrunk dramatically.

"The window for backup time is getting smaller and smaller when you consider the global market," added Ilg.
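The arithmetic behind the shrinking window is straightforward: the smaller the window, the higher the sustained throughput your backup hardware must deliver. A minimal sketch, with purely illustrative figures (the 500 GB warehouse size is an assumption, not from the article):

```python
# Rough sketch: sustained throughput needed to finish a backup
# inside a given window. All figures are illustrative.

def required_throughput_mb_s(data_gb: float, window_hours: float) -> float:
    """Return the sustained MB/s needed to back up data_gb in window_hours."""
    return (data_gb * 1024) / (window_hours * 3600)

# A hypothetical 500 GB knowledge warehouse: halving the window
# from eight hours to four doubles the required throughput.
eight_hour = required_throughput_mb_s(500, 8)  # ~17.8 MB/s
four_hour = required_throughput_mb_s(500, 4)   # ~35.6 MB/s
print(f"8-hour window: {eight_hour:.1f} MB/s; 4-hour window: {four_hour:.1f} MB/s")
```

The same calculation works in reverse: given a device's real-world throughput, it tells you how much data you can realistically protect in the window you have.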

Rami Hyary, regional manager at Acuitive (www.acuitive.com), agreed with Ilg that speed is paramount. "Choose your backup devices according to speed, not capacity," he said. Hyary also warned that it is crucial to perform periodic restores of data to test the process.

Have you checked your backup today?

"It is not enough just to make sure the backup system didn't report any errors," said Hyary. "It is very important to remember that the purpose of doing a backup is to be able to restore the data if a problem occurs."

Hyary recalled a company doing a backup for months without checking to see if it was operating correctly. It was not until the client was upgrading the server's operating system that it discovered the backup tapes were no good. The backup software the company used was an older version that was just verifying the file header to check if the backup was successful, not the file itself.

As Hyary quickly found out when his company was called in to help, a hardware failure had transposed data bits. Because the file size remained the same, the backup software did not detect a problem. As a result, the accounting department had to rekey three months of financial data.
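The failure mode in Hyary's story is exactly what a content checksum catches: a size-only or header-only check passes even when bits inside the file have been corrupted. A minimal sketch of content-level verification (the function names are illustrative, not from any particular backup product):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Hash the full file contents in chunks, not just the header."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(original: str, restored: str) -> bool:
    """A header- or size-only comparison misses flipped or transposed
    bits; comparing content hashes does not."""
    return file_sha256(original) == file_sha256(restored)
```

Two files of identical size but differing content will hash differently, so the corruption in the anecdote would have been flagged the first night instead of three months later.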

Hyary suggested, "Regularly perform a routine of creating a file with known data in it. Then let your regular nightly backup procedure take place. The next morning, delete the file and wait one day or one week, then restore the file from tape. Did it work? Make sure you know from time of failure how long it will take you to make the data available to users."
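Hyary's canary-file drill can be scripted so it runs on a schedule instead of relying on someone remembering it. A toy sketch of the two ends of the drill, assuming the nightly backup and the tape restore happen in between (file names and payload are hypothetical):

```python
import os

CANARY = "backup_canary.txt"
KNOWN_DATA = b"canary payload: known test contents\n"

def plant_canary(share_dir: str) -> None:
    """Step 1: drop a file with known contents somewhere the nightly
    backup will pick it up."""
    with open(os.path.join(share_dir, CANARY), "wb") as f:
        f.write(KNOWN_DATA)

def check_restore(restored_path: str) -> bool:
    """Step 3: after deleting the original and restoring from tape,
    confirm the restored copy matches the known payload byte for byte."""
    with open(restored_path, "rb") as f:
        return f.read() == KNOWN_DATA
```

Timing the gap between deleting the file and a successful check_restore also answers Hyary's last question: how long it actually takes to get data back in front of users.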

Where to begin?

Safeguarding your valuable knowledge warehouse requires many levels of protection. CA's Greene suggests that you start by mapping critical resources and asking yourself the following questions:

What amount of downtime is acceptable? If your servers go down during the day, how much data can you afford to lose (since your last backup)? Can you reconstruct the lost information? How long would it take? What personnel resources are available in the remote sites?

By answering those questions, you are taking a big step in developing a solid backup plan.

Other methods

Besides backing up your data to media such as tape, other steps can be taken to ensure maximum uptime.

Although not a new storage trend, redundant array of inexpensive disks (RAID) has become a popular protection against individual hard-drive failure. Instead of storing all your data on a single hard drive, a RAID configuration stores the same data redundantly across multiple hard disks. The benefits are added security if a single drive fails and increased performance, because reads and writes can occur on multiple drives simultaneously.

There are numerous RAID levels, each offering a different balance of redundancy and performance. Typically, if one drive fails (such as in a RAID 5 setup), users won't notice a thing. The system keeps running as if nothing happened, and the network administrator is notified so the failed drive can be replaced after-hours.
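The reason a RAID 5 array survives a single drive loss is simple XOR parity: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt from the survivors. A toy sketch of one stripe (real RAID 5 rotates parity across all drives; the four-byte blocks here are purely illustrative):

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# One stripe across three data "drives" plus a parity block.
d0, d1, d2 = b"KNOW", b"LEDG", b"E..."
parity = xor_blocks([d0, d1, d2])

# Drive 1 fails: rebuild its block from the surviving drives and parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1  # the lost data is recovered exactly
```

Because XOR is its own inverse, the same operation that computes the parity also reconstructs a lost block, which is why the array keeps serving reads while degraded.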

Although RAID 5 is designed to survive an individual hard-drive failure, what happens if another component fails, such as the network card or motherboard? How do you prevent downtime then?

Enter the hot server. A hot server takes over if there's a hardware failure in the primary server. Similar to mirroring one hard drive to another, hot servers mirror the complete server--including data written to the drive in sub one-second intervals--to a backup server. So if the primary server goes down, the secondary server takes over, and you are back in business in seconds instead of hours or days. Companies that provide those types of solutions include Vinca (www.vinca.com) and Octopus (www.octopustech.com).
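The takeover logic behind a hot server pair boils down to a heartbeat: as long as the primary keeps signaling, the standby waits; when heartbeats stop, the standby promotes itself. This toy sketch illustrates only that principle; it is not how Vinca's or Octopus's products are implemented, and the class and timeout are invented for illustration:

```python
import time

class HotStandby:
    """Toy failover monitor: promote the standby when the primary's
    heartbeats stop arriving within the timeout."""

    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.active = "primary"

    def heartbeat(self) -> None:
        """Called whenever the primary checks in."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> str:
        """Fail over if the primary has been silent too long."""
        silent_for = time.monotonic() - self.last_heartbeat
        if self.active == "primary" and silent_for > self.timeout_s:
            self.active = "secondary"  # take over in seconds, not hours
        return self.active
```

A real product also has to replicate writes to the standby (the sub-second mirroring the article describes) and guard against both servers believing they are active; this sketch covers only the detection-and-takeover step.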

Another option is to take the storage services completely off-site. Even with a hot server, the secondary server may not help if damage was done within your server room by a disaster such as a storm. To aid in those situations, companies can create a separate location with duplicate data or turn to a third party that offers those services.

Back to basics--don't forget the paper!

Giff Salisbury, CEO of Commercial Archives (Buffalo, NY), has developed a disaster recovery program that he markets and uses in his own business. He has a 20-step program in which clients develop their own disaster recovery plans, or companies can use his services to help design a plan.

"Many times we think of only the computers in a company," Salisbury said, "but what about the paper documents?" Could your business recover if you lost all your hard-copy documents in a disaster? How long would it take? With the vast amount of information that can be captured and stored today, companies are becoming more dependent on those repositories. Massive databases and storage devices are growing exponentially and so is the need to provide a reliable backup and recovery plan.

Without the ability to replace the information you use to create knowledge, where would you be? Are you providing a safe haven from the next storm for your most valuable asset--knowledge?
