Disaster Management Information System (DMIS)
Objectives
o To overcome the limitations of the existing system.
o Effective utilization of the natural resources database in the event of a disaster.
o Building a decision support system for better district administration.
o Providing vital information related to pre-disaster and post-disaster at one's fingertips.
o Facilitating users in easy data integration, and in editing and updating spatial and non-spatial data with ease.
o To assist in post-disaster damage assessment analysis.
o To provide a centralized system that is time- and cost-effective and maintenance free.
o Development of user-friendly, customized DMIS software.
"Disaster management" means a continuous and integrated process of planning,
organizing, coordinating and implementing measures which are necessary or
expedient for prevention of danger or threat of any disaster, mitigation or reduction
of risk of any disaster or its severity or consequences, capacity-building,
preparedness to deal with any disaster, prompt response to any threatening disaster
situation or disaster, assessing the severity or magnitude of effects of any disaster,
evacuation, rescue and relief, and rehabilitation and reconstruction. Disaster
management comprises all forms of activities, including structural and
non-structural measures, to avoid (i.e. prevention) or to limit (i.e. mitigation and
preparedness) the adverse effects of disasters in the pre-disaster phase and the
post-disaster stage: Response, Relief, Recovery, and Reconstruction.
GIS is a powerful technology that can assist decision-making in all phases of the
disaster management cycle. GIS tools are used for integrating the geographic (i.e.
location) and the associated attribute data pertaining to the location and its spatial
relationship with numerous other parameters, to carry out effective spatial
planning, minimize the possible damage, ensure immediate action when required,
and prioritize actions for long-term risk reduction. A resources database on various
themes, obtained through Remote Sensing data, has been compiled for all the
districts of Maharashtra. Similarly, attribute data on Demography & Census,
government core sectors, and past disasters have been integrated in the DMIS.
Spatial and non-spatial databases have been generated in a GIS environment. A
customized system has been developed for each district for prioritizing hazards for
use in developing Mitigation Strategies, Risk Estimation, and Hazard and
Vulnerability Mapping. A user-friendly, menu-driven software has been developed
in ArcGIS using ArcObjects with Visual Basic 6.0. It has been designed and
customized keeping in mind the skill level of the expected users at the district
level. The methodology and database have been customized for easy
implementation.
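To make the idea of a spatial query concrete, here is a minimal sketch of the kind of analysis a DMIS runs: finding which settlements fall inside a hazard zone. The village names, coordinates, and flood-zone polygon below are invented for illustration; the actual system described above uses ArcGIS with ArcObjects, but the underlying point-in-polygon test works like this:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the polygon
    (a list of (x, y) vertices)?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle each time a horizontal ray from the point crosses an edge.
        if (yi > y) != (yj > y) and x < xi + (y - yi) * (xj - xi) / (yj - yi):
            inside = not inside
        j = i
    return inside

# Hypothetical data: a flood-hazard zone and village locations (map units).
flood_zone = [(0, 0), (10, 0), (10, 6), (0, 6)]
villages = {"Rampur": (3, 2), "Shivnagar": (12, 5)}

at_risk = [name for name, (x, y) in villages.items()
           if point_in_polygon(x, y, flood_zone)]
```

Here `at_risk` contains only the village whose coordinates fall inside the hazard polygon; a real DMIS would run the same test against district-level Remote Sensing and Census layers.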
Three Phases of Disaster Management
Disaster Planning
Records managers are responsible for determining which records are vital to the
organization's operations. Because there are many types of disasters that could
affect organizational records, records managers should first identify important
business issues related to those vital records. The most obvious issues are fire,
flood and storms, though errors and omissions, negligence, employee sabotage,
computer terrorism, hacking and physical terrorism have risen to the highest level
of importance.
Corporate records managers should gather information about all information
hazards and risks, then identify vital electronic and hard-copy business records
and vital backup and recovery processes, both onsite and offsite. They should
analyze and determine how the business could be affected if this information were
lost or damaged. The corporate records manager should then examine how
to prevent these hazards and reduce risk by addressing them.
Disaster Management
During and immediately after a disaster has occurred, written roles and
responsibilities become strategic. Even the best planning may not prevent
damage to vital records. Consequently, records managers must have records
mitigation plans in place for both timely and economical responses to records
disasters. This way, they may salvage or replace damaged records and the
information they contain.
Disaster Recovery
After a disaster, organizations need to continue their operations. The
availability of critical disaster plan information is key to the continuation of
business operations. Records managers need to ensure that all responsible
managers and staff are familiar with the records disaster mitigation and recovery
program. They should document the policies, procedures, roles and
responsibilities governing the records disaster mitigation and recovery program
in disaster recovery procedure manuals. These should clearly assign
responsibility for coordinating disaster recovery plans and activities for specific
job functions and record series. Managers should also be authorized to designate
other members of the disaster recovery team in a time of need.
The greatest problem with maintaining disaster plans has always been keeping
them up-to-date and reflective of actual work processes. They are worthless if
they are simply created and kept on a shelf. Recently, there have been examples
of plans that did not protect an organization or provide an avenue for recovery.
Energy explosions, confidential information hacking, disastrous storms and
floods and other recent disasters have demonstrated that disaster plans are
necessary for avoiding harm and financial loss. The records manager's role needs
to reflect the importance of this responsibility.
Securing the Web
Web servers are one of the many public faces of an organization and one of the
most easily targeted. Web servers represent an interesting paradox: namely, how
do you share information about your organization without giving away the
so-called store? Solving this dilemma can be a tough and thankless job, but it is
also one of the most important.
1. Denial of service: The denial of service (DoS) attack is one of the real "old-
school" attacks that a server can face. The attack is very simple, and nowadays it
is commonly carried out by those individuals known as script kiddies, who
basically have a low skill level. In a nutshell, a DoS attack is an attack in which
one system attacks another with the intent of consuming all the resources on the
system (such as bandwidth or processor cycles), leaving nothing behind for
legitimate requests.
2. Distributed denial of service: The distributed DoS (DDoS) attack is the big
brother of the DoS attack and as such is meaner and nastier. The goal of the DDoS
attack is to do the same thing as the DoS, but on a much grander and more
complex scale. In a DDoS attack, instead of one system attacking another, an
attacker uses multiple systems to target a server, and by multiple systems I mean
not hundreds or thousands, but more on the order of hundreds of thousands.
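The text above describes the attack itself; a common first line of defense against this kind of resource exhaustion (not covered in the source) is per-client rate limiting. A minimal token-bucket sketch, with the rate and burst parameters invented for illustration:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allow up to `rate` requests per second,
    with short bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client IP; a flood from one source quickly runs dry
# while legitimate clients keep their own token budget.
bucket = TokenBucket(rate=10, capacity=2)
```

In a real server each client address gets its own bucket, so a single flooding host is throttled without starving legitimate requests.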
Some of the more common DDoS attacks include:
FTP bounce attacks: A File Transfer Protocol (FTP) bounce attack is
enacted when an attacker uploads a specially constructed file to a vulnerable FTP
server, which in turn forwards it to another location, which generally is another
server inside the organization. The file that is forwarded typically contains some
sort of payload designed to make the final server do something that the attacker
wants it to do.
Port scanning attack: A port scanning attack is performed through the
structured and systematic scanning of a host. For example, someone may scan your
Web server with the intention of finding exposed services or other vulnerabilities
that can be exploited. This attack can be fairly easily performed with any one of a
number of port scanners available freely on the Internet. It also is one of the more
common types of attacks, as it is so simple to pull off that script kiddies attempt it
just by dropping in the host name or IP address of your server (however, they
often don't know how to interpret the results); an advanced attacker will use port
scanning to uncover information for a later effort.
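To show how simple the mechanics described above really are, here is a minimal sketch of what a port scanner does under the hood: attempt a TCP connection and report whether it succeeds. The target address in the comment is a placeholder from the reserved documentation range; only ever scan hosts you are authorized to test.

```python
import socket

def scan_port(host, port, timeout=0.5):
    """Try a TCP connection to host:port; True means the port accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage against a host you are authorized to test:
# open_ports = [p for p in (21, 22, 80, 443) if scan_port("192.0.2.10", p)]
```

Real scanners such as nmap add stealthier probe types and service fingerprinting, but this connect-and-see loop is the core of the "structured and systematic scanning" the text describes.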
Ping flooding attack: A ping flooding attack is a simple DDoS attack in
which a computer sends a packet (ping) to another system with the intention of
uncovering information about services or systems that are up or down. At the low
end, a ping flood can be used to uncover information covertly; but throttle up the
packets being sent to a target or victim, and the system will go offline or
suffer slowdowns. This attack is "old school" but still very effective, as a number
of modern operating systems are still susceptible to it and can be taken
down.
Smurf attack: This attack is similar to the ping flood attack but with a clever
modification to the process. In a Smurf attack, a ping command is sent to an
intermediate network, where it is amplified and forwarded to the victim.
Web page defacement: Web page defacement is seen from time to time around the
Internet. As the name implies, a Web page defacement results when a Web server
is improperly configured, and an attacker uses this flawed configuration to modify
Web pages for any number of reasons, such as for fun or to push a political cause.
3. SQL injection: Structured Query Language (SQL) injections are attacks carried
out against databases. In this attack, an attacker uses weaknesses in the design of
the database or Web page to extract information or even manipulate information
within the database.
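A minimal sketch of this attack and its standard defense, using Python's built-in sqlite3 module; the table, rows, and payload are invented for illustration. The vulnerable query splices attacker input directly into the SQL string, while the parameterized version passes it as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
conn.execute("INSERT INTO users VALUES ('bob', 'hunter2')")

# A classic injection payload supplied in place of a user name.
payload = "' OR '1'='1"

# VULNERABLE: the input is pasted into the SQL text, so the quote in the
# payload ends the string literal and the OR clause matches every row.
vulnerable_sql = "SELECT * FROM users WHERE name = '%s'" % payload
leaked = conn.execute(vulnerable_sql).fetchall()

# SAFE: a parameterized query treats the payload as a plain value,
# so it matches no user and nothing leaks.
safe = conn.execute("SELECT * FROM users WHERE name = ?",
                    (payload,)).fetchall()
```

Here `leaked` contains both rows of the table while `safe` is empty, which is exactly the "extract information" weakness the text describes and its textbook fix.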
4. Poor coding: Anyone who has been a developer or worked in information
technology has seen the problems associated with sloppy or lazy coding practices.
Poor coding problems can result from any one of a number of factors, including
poor training, new developers, or insufficient quality assurance for an application.
5. Shrink-wrapped code: This problem is somewhat related to the above issues
with poor coding, but with a twist: basically, this problem stems from the
convenience of obtaining precompiled or pre-written components that can be used
as building blocks for your own application, shortening your development cycle.
The downside is that the components you're using to help build your application
may not have gone through the same vetting process as your in-house code, and
applications may have potential problem areas. Additionally, it's not unheard of for
developers who don't really know how to analyze the code and understand what it's
actually doing to put so-called "shrink-wrapped" components in applications.