CODATA 2002: 18th International Conference
Frontiers of Scientific and Technical Data
Montréal, Canada, 29 September - 3 October 2002

Biological Science Data



Track I-C-2:
Integrated Science for Environmental Decision-making: The Challenge for Biodiversity and Ecosystems Informatics

Chairs: Gladys Cotter, U.S. Geological Survey, USA and
Bonnie Carroll, Information International Associates, USA

Introductory Context: From Local to Global: We will lay out the intent and overview of the session, which is to explore the issues of turning data into a viable resource for decision-making through the development of biodiversity information infrastructures and systems. Particular emphasis will be placed on issues of obtaining, managing, accessing and using data that cross differing spatial and temporal scales. Challenges of integrating current electronic monitoring data with legacy data, such as museum specimens for historical context, will be addressed.

 


1. Building the US National Biological Information Infrastructure: Synergy between Regional and National Initiatives
John (Jack) Hill, Houston Advanced Research Center, USA

Information concerning biodiversity and ecosystems is critical to a wide range of scientific, educational, and government uses. However, the majority of this information is not easily accessible. In 1993, the National Research Council (NRC) published a report entitled "A Biological Survey for the Nation." The report recommended that the U.S. Department of the Interior oversee the development of a National Biotic Resource Information System. The resulting system should: 1) be a distributed federation of databases designed to make existing information more accessible, 2) develop new ways to collect and distribute data and information, as well as lead in promoting data standards, 3) support continuing state efforts to develop regional and statewide environmental databases, particularly with museums, universities and similar organizations, and 4) participate in interagency initiatives to coordinate the collection and management of biodiversity data by the federal government.

In 1994, the U.S. President signed Executive Order 12906, "Coordinating Geographic Data Acquisition and Access: the National Spatial Data Infrastructure (NSDI)." The NSDI deals with the acquisition, processing, storage, and distribution of geospatial data, and is implemented by the Federal Geographic Data Committee (FGDC). At the same time, the national biotic resource information system became the NBII (web page: http://www.nbii.gov). The NBII is implemented under the auspices of the U.S. Geological Survey (USGS). The NBII works with the FGDC to increase access to and dissemination of biological geospatial data through the NBII and the NSDI. The NBII biological metadata standard is an approved "profile," or extension, of the FGDC's geospatial metadata standard.

In 1998, the Biodiversity and Ecosystems Panel of the President's Committee of Advisors on Science and Technology (PCAST) released the report titled "Teaming With Life: Investing in Science to Understand and Use America's Living Capital." The PCAST report recommended that the federal government develop the "next generation NBII," or NBII-2. This would be accomplished through a system of nodes (interconnected entry points to the NBII). In 2001, the U.S. Congress allocated the funds for the development and promotion of the node-based NBII-2.

Development and implementation of the NBII nodes is underway and is being conducted in collaboration with every sector of society. There are three types of nodes. "Regional" nodes have a geographic area of responsibility and represent a regional approach to local data, environmental issues, and data collectors. Twelve (12) regional nodes are required to cover the entire U.S. "Thematic" nodes focus on a particular biological issue (e.g., bird conservation, fisheries and aquatic resources, invasive species, urban biodiversity, wildlife disease/human health). Such issues cross regional, national, and even international boundaries. "Infrastructure" nodes focus on issues such as the creation, adoption, and implementation of standards through the development of common tool suites, hardware and software protocols, and geospatial technologies to achieve interoperability and transparent retrieval across the entire NBII network.

This presentation will highlight NBII development, implementation, lessons learned, and successful user applications of two regional nodes, the Southern Appalachian Information Node (SAIN) and the Central Southwest/Gulf Coast Node (CSGCN). Specific NBII applications will address biological, environmental, and natural resource management issues at the country, regional, county, and local (site-specific) levels.

 

2. Building a Biodiversity Information Network in India — Biodiversity Informatics and Developing World: Status and Potentials
Vishwas Chavan and S. Rajan, National Chemical Laboratory, India

The most striking feature of Earth is the existence of life, and the most striking feature of life is its diversity. Biodiversity, and the ecosystems that support it, contribute trillions of dollars to national and global economies. The basis of all efforts to effectively conserve biodiversity and natural ecosystems lies in efficient access to the knowledge base on biodiversity and ecosystem resources and processes. Most developed countries are well ahead in the race to take advantage of new electronic information opportunities to manage and build their biodiversity knowledge bases, the recognized cornerstone of their future economic, social and environmental well-being.

For developing nations, which harbor rich and diversified natural resources, much of the biodiversity information is neither available nor accessible. Hence there is a need for an organized, well-resourced, national approach to building and managing biodiversity information through collaborative efforts by this group of Third World nations.

This paper reviews the state of information technology applications in the field of biodiversity informatics in these nations, with India as a model nation. India is one of the 12 mega-biodiversity countries, bestowed with rich floral and faunal diversity. Given the deteriorating status of its natural resources and its developmental activities, India is one of the best model nations for such a review. Attempts made by the authors' group to develop and implement cost-efficient, easy-to-use tools for biological data management are described in brief. The feasibility of employing available tools, techniques and standards for biological data acquisition, organization, analysis, modeling and forecasting is discussed, keeping in view the informatics awareness amongst biologists, ecologists and planners. With specific reference to Indian biodiversity, the authors suggest a framework for building a national information infrastructure to correlate, analyze and communicate biological information, to help these nations generate sustainable wealth from nature.

 

3. Developing and Integrating Data Resources from a North American Perspective
Jorge Soberon, CONABIO, Mexico

Biodiversity information denotes a very heterogeneous set of data formats, updating regimes, quality levels, and users. The data on the labels of biological specimens provide a natural organizing framework, because the georeference and the taxonomic name can be used to link to geographically organized data (remote sensing, cartography) and to a variety of points of view (ecological or genetic data, legislation, traffic, etc.). Label data, however, are widely distributed over hundreds of institutions. In this talk, we describe the technical and organizational problems that were solved to create REMIB (the World Network of Biodiversity Information), which links nearly 5 million specimens from 61 collections of 16 institutions in three countries. We also give one example of the use that such a system may have.



4. Ecological Informatics: a Long-Term Ecological Research Perspective
William Michener, Long Term Ecological Research Program, USA
Scientists within the Long-Term Ecological Research (LTER) Network have provided leadership in ecological informatics since the inception of LTER in 1980. The success of LTER, where research projects span wide temporal and spatial scales, depends on the quality and longevity of the data collected. Scientists have devised data collection, data entry, data access, QA/QC and archiving strategies to ensure that high quality data are appropriately managed to meet the needs of a broad user base for decades to come. The LTER cross-site Network Information System (NIS) is being developed to foster data sharing and collaboration among sites. Recent and important milestones for LTER include the adoption of Ecological Metadata Language as a standard, along with supporting metadata software. Current and future foci include developing data standardization protocols and semantic mediation engines, both of which will facilitate LTER modeling efforts.


5. The Global Biodiversity Information Facility (GBIF) — Challenges and Opportunities from a Global Perspective
Guy Baillargeon, Agriculture and Agri-Food Canada

The Global Biodiversity Information Facility (GBIF) is a new international scientific cooperative project based on an agreement between countries, economies, and international organizations. The primary goal of GBIF is to establish an interoperable, distributed network of databases containing scientific biodiversity information, in order to make the world's scientific biodiversity data freely available to all. GBIF will play a crucial role in promoting the standardization, digitization and global dissemination of the world's scientific biodiversity data within an appropriate framework for property rights and due attribution. Initially, GBIF will focus on species- and specimen-level data in four priority areas: data access and data interoperability; digitization of natural history collection data; an electronic catalogue of names of known organisms; and outreach and capacity building. With an expected staff of only 14, GBIF will work mostly with others in order to catalyse synergistic activities between participants, generate new investments and eliminate barriers to cooperation. In its first year of activity, GBIF has been concentrating on organisational logistics, staffing, and consultations with Scientific and Technical Advisory Groups (STAGs). Initial work plans are being drafted by the Science Committee and its four subcommittees. Once functional, GBIF will unlock vast amounts of biodiversity occurrence data for use in research and environmental decision-making. Life itself, in all its diversity (from molecules, to species, to ecosystems), will provide numerous additional sets of data layers for integrated environmental analysis, modelling and forecasting.

 


Track III-C-2:
Proteome Database

Chair: Akira Tsugita, Proteomics Research Laboratory, Tsukuba, Japan

Proteomics research is growing broadly and exponentially. Such research includes: extraction of protein mixtures from cells and tissues, separation and isolation of the proteins (by 2-DE, HPLC, etc.), and identification of the proteins (by terminal sequencing, in-gel digestion with MALDI-TOF-MS, capillary-LC/ESI-MS-MS, etc.). This research has goals such as: 1) establishment of a protein catalogue, a complete list of all distinct proteins, including post-translational modifications, multiple splice variants and cleavage products; this information corresponds to the genome information; 2) correlation to protein/protein interactions; 3) correlation to protein/nucleic acid interactions; 4) establishment of structure/active motif information; 5) tissue-specific protein expression; 6) age-specific protein expression; and 7) intra-cellular protein expression. Proteomics is now being applied to pharmacology and medicine. Recently, the international HUPO (Human Proteome Organisation) was established, and extremely active research has been carried out. While the genome sequence is uni-dimensional and finite, proteome information is multi-dimensional, with quasi-infinite dimensions. The proteome is dynamic and constantly changing in response to various environmental factors and signals. This session is devoted to the evaluation, compilation, and dissemination of such proteome data, and to a discussion of proteome information patenting.

1. A Proteomic Approach to the Study of Cancer
Julio E Celis, Institute of Cancer Biology, Danish Cancer Society and Danish Centre for Human Genome Research, Denmark

During the past 20 years, high resolution two-dimensional polyacrylamide gel electrophoresis (2D PAGE) has been the technique of choice for analysing the protein composition of cell types, tissues and fluids, as well as for studying changes in protein expression profiles elicited by various effectors. The technique, which was originally described by O'Farrell and Klose, separates proteins both in terms of their isoelectric point (pI) and molecular weight. Usually, one chooses a condition of interest and lets the cell reveal the global protein behavioral response, as all detected proteins can be analyzed both qualitatively (post-translational modifications) and quantitatively (relative abundance, co-regulated proteins) in relation to each other [http://biobase.dk/cgi-bin/celis]. Presently, high resolution 2D PAGE provides the highest resolution for protein analysis and is a key technique in proteomics, an emerging area of research of the post-genomic era that deals with the global analysis of gene expression using a plethora of technologies to resolve (2D PAGE), identify (mass spectrometry, Western immunoblotting, etc.), quantitate and characterize proteins, identify interacting partners, and store (comprehensive 2D PAGE databases), communicate and interlink protein and DNA mapping and sequence information from ongoing genome projects. Proteomics, together with genomics, cDNA arrays, phage antibody libraries and transgenic models, belongs to the armamentarium of technologies comprising functional genomics. Here I will report on our efforts to apply proteomic technologies to the study of bladder cancer.

 

2. A Proposed XML Format for Proteomics Databases
Kenichi Kamijo, T. Yamazaki and A. Tsugita, Proteomics Research Center, NEC Corporation, Japan

We propose an XML (eXtensible Markup Language) format for proteomics databases to exchange proteome analysis data. XML-based data are highly machine-readable and readily represent information hierarchies and relationships. There have been several XML formats for proteome data, which mainly represent the sequence information stored in the Protein Information Resource (PIR) and the Protein Data Bank (PDB).

Our XML-based data model has a proteome-analysis-oriented structure and describes sample preparation information, 2D gel electrophoresis images, spot identification information in the gels, and the sequence information of the spots. The model is used to exchange both preparation parameters and the results of 2D gel electrophoresis analysis. Collaboration among proteomics researchers would be accelerated if a platform for exchanging these data were developed on the Internet.
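
As an illustration only (the authors' actual schema is not reproduced here), a record in such a proteome-analysis-oriented model might nest spot and sequence information under a gel image, itself under an experiment. A minimal sketch in Python using only the standard library; all element and attribute names are hypothetical:

    # Hypothetical proteome-analysis XML record: experiment -> sample
    # preparation + gel image -> spot -> identification and sequence.
    # Element and attribute names are illustrative, not the authors' schema.
    import xml.etree.ElementTree as ET

    exp = ET.Element("experiment", id="EXP-001")
    prep = ET.SubElement(exp, "samplePreparation")
    ET.SubElement(prep, "lysisBuffer").text = "2-DE standard"
    gel = ET.SubElement(exp, "gelImage", format="TIFF", href="gel001.tif")
    spot = ET.SubElement(gel, "spot", id="S0042", x="132.5", y="87.1")
    ET.SubElement(spot, "identification", method="MALDI-TOF-MS").text = "P12345"
    ET.SubElement(spot, "sequence").text = "MKWVTFISLLLLFSSAYS"

    print(ET.tostring(exp, encoding="unicode"))

Such a hierarchy keeps preparation parameters and analysis results together in one machine-readable document, which is the exchange scenario the abstract describes.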

Using our XML-based model for proteomics, we have developed a web-based prototype system which consists of an XML database, an agent, security functions and a graphical user interface (GUI).

 

3. Proteomics: An Important Post-genomic Tool for Understanding Gene Function
Richard J. Simpson, L. M. Connolly, D. F. Frecklington, H. Ji, G. E. Reid, M. J. Layton, and R. L. Moritz, Joint ProteomicS Laboratory (JPSL), Ludwig Institute for Cancer Research and Walter & Eliza Hall Institute for Medical Research, Melbourne, Australia

If DNA is the blueprint to build the complex machine that is a human, then proteins are the parts of the machine that make it work. With the completion of the first draft of the DNA sequence that makes up the human genome, the challenge facing medical research now is to understand gene function. Proteomics provides a biological tool, or assay, for elucidating gene function.

While the term proteomics is often synonymous with high-throughput protein profiling of normal versus diseased tissue by 2-D gel analysis, this definition is very limiting. Increasingly, the power of proteomics is being recognized for its ability to unravel intricate protein-protein interactions associated with intracellular protein trafficking and signaling pathways (i.e., cell-mapping proteomics). The technology issues associated with expression proteomics (the study of global changes in protein expression) and cell-mapping proteomics (the systematic study of protein-protein interactions through the isolation of protein complexes) are almost identical and differ only in front-end scale-up processes. The application of proteomics to various biological problems will be presented, with representative examples of (a) differential protein expression for identifying surrogate markers for colon cancer progression, (b) a non-2D gel approach for dissecting complex mixtures of membrane proteins, (c) proteins that inhibit cytokine signal transduction, and (d) proteins that are involved in the intricate pathway that leads to programmed cell death (apoptosis).


4. Human Kidney Glomerulus Proteome and a Proposed Method for Native Protein Profiling
Akira Tsugita, K. Miyazaki, Y. Yoshida and T. Yamamoto, NEC Proteomics Research Center and Niigata University Medical Faculty, Japan

To elucidate the molecular mechanism of chronic nephritis, the following proteome research on kidney glomeruli has been initiated. Pieces of kidney cortex with normal appearance were obtained from patients who underwent surgical nephrectomy due to renal tumors. Glomeruli were prepared from the cortex by a standard sieving process using four sieves. The glomeruli on the 150 µm sieve were collected and further purified by picking them out under a phase-contrast microscope. The glomeruli were spun down, homogenized in 2-DE lysis buffer and incubated.

2-DE was carried out on the glomeruli preparation by the standard method (25×20 cm), and about 1500 protein spots were separated. Protein identification has been carried out by N- and C-terminal sequencing and by peptide mass fingerprinting with MALDI-TOF-MS; 200 spots have been identified.

In addition, a new method has been developed for native protein profiling. The first dimension is in the liquid phase on an isoelectric chromatofocusing column, and the second dimension uses non-polar chromatography and molecular-sieving chromatography, or a specially designed reverse-phase chromatography.


Track III-D-2:
Genetic Data Issues

Chair: H. Sugawara

1. Genetic diversity in food legumes of Pakistan as revealed through characterization, evaluation and biochemical markers
Abdul Ghafoor and Asif Javaid, Plant Genetic Resources Institute, National Agricultural Research Center, Islamabad, Pakistan

Pakistan enjoys four distinct seasons a year, which enables it to produce both winter and summer legumes. Winter legumes consist of chickpea (Cicer arietinum L.), lentils (Lens culinaris), peas (Pisum sativum), grass pea (Lathyrus sativus) and faba bean (Vicia faba), whereas summer legumes are mungbean (Vigna radiata), black gram (Vigna mungo), cowpea (Vigna unguiculata) and moth bean (Vigna aconitifolia). Common bean (Phaseolus vulgaris) is confined to the high mountainous region of the northern areas, at altitudes ranging from 1000 to 2400 masl. These legumes have been collected and preserved in the gene bank for short duration (5-10 years) at 4 °C, medium term (15-20 years) at 0 °C and long term (more than 50 years) at -20 °C. The numbers of accessions preserved in the gene bank are 2065 (chickpea), 805 (lentil), 104 (peas), 100 (lathyrus), 101 (faba bean), 626 (mungbean), 646 (black gram), 199 (cowpea), 85 (moth bean) and 101 (common bean). About 80% of this germplasm has been characterized and evaluated for quantitative traits. Forty accessions of wild chickpea and one wild Vigna sp. have also been preserved.

The germplasm of black gram (250 accessions), mungbean (60 accessions), lentil (350 accessions), chickpea (350 accessions), wild chickpea (40 accessions), peas (104 accessions), cowpea (173 accessions) and wild Vigna spp. (one accession) has been evaluated by SDS-PAGE and, except for peas and wild chickpea, a low level of genetic diversity was observed for all the material evaluated. This situation led to the use of DNA markers: 40 accessions of black gram and ten accessions of lentil were analysed by RAPD, which revealed a higher level of genetic diversity than SDS-PAGE. It was concluded that legume genetic resources should be characterised and evaluated along with biochemical analyses, including protein and DNA markers, for better gene bank management. These comprehensive data will lead to the establishment of core collections. Each of the legumes mentioned above is a mandate crop of one or another international centre, except black gram and moth bean, although the latter is less important. Black gram has been identified as a potential crop for most Asian countries, including India, Nepal, Bangladesh, Sri Lanka, Pakistan, the Philippines, Thailand, Korea, Japan, Taiwan, China, etc. It is also recognized as an important crop in parts of the African continent. Low genetic diversity coupled with low stability is a characteristic of this crop that could be addressed by developing a sound linkage between black gram growing countries, and PGRI could serve as a regional gene bank for the preservation, evaluation and distribution of black gram germplasm.


2. Visualization and Correction of Prokaryotic Taxonomy Using Techniques from Exploratory Data Analysis
T. G. Lilburn, American Type Culture Collection, USA
G. M. Garrity, Bergey’s Manual Trust and Department of Microbiology and Molecular Genetics, Michigan State University, USA

There are, at present, over 5,700 named prokaryotic species. There has long been a need to organize these species within a comprehensive taxonomy that relates each species to all the others. For some years, researchers have been sequencing the small subunit ribosomal RNA genes of many prokaryotes, initially to try and establish the evolutionary relationships among all prokaryotes and subsequently in order to aid in the identification of prokaryotes both known and unknown. These sequences have become an almost universal feature in the description of new species. Thus, for the purposes of classification, the sequences are probably the most useful, universally described characteristic of the prokaryotes. Small subunit rRNA gene sequences were used by the staff of the Bergey’s Manual Trust to establish prokaryotic taxonomy above the Family level only recently. This effort was facilitated by the application of techniques drawn from the field of exploratory data analysis to visualize the evolutionary relationships among large numbers of sequences and, hence, among the organisms they represent. We describe the techniques used to develop the first maps of sequence space and the techniques we are currently using to ease the placement of new organisms in the taxonomy and to uncover errors in the taxonomy or in sequence annotation. A key advantage of these techniques is that they allow us to see and use the complete data set of over 9,200 sequences. We also present plans for the development of a tool that will allow all interested researchers to participate in the maintenance and modification of the taxonomy.
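
The abstract does not specify which exploratory-data-analysis technique underlies these maps; one standard way to turn a matrix of pairwise sequence distances into a 2-D "map of sequence space" is classical multidimensional scaling (principal coordinates analysis), sketched below on a toy distance matrix, not real rRNA data:

    # Classical MDS (principal coordinates analysis) on a toy 4x4 matrix
    # of pairwise sequence distances.
    import numpy as np

    D = np.array([[0.00, 0.10, 0.40, 0.45],
                  [0.10, 0.00, 0.42, 0.47],
                  [0.40, 0.42, 0.00, 0.12],
                  [0.45, 0.47, 0.12, 0.00]])

    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    top = np.argsort(eigval)[::-1][:2]       # two largest eigenvalues
    coords = eigvec[:, top] * np.sqrt(eigval[top])

    print(coords)   # rows 0-1 and rows 2-3 fall into two distinct clusters

Each row of coords places one organism on the map; a new sequence can then be placed relative to the existing set from its distances to them.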



3. Towards T-cell Epitope Design
Pandjassarame Kangueane, Meena K Sakharkar, Liew K. Meow, Nanyang Centre for Supercomputing and Visualisation, MPE, Nanyang Technological University, Singapore

Quantitative information on the types of inter-atomic interactions at the MHC-peptide interface will provide insights into backbone/sidechain atom preferences during binding. Protein crystallographers have documented qualitative descriptions of such interactions in each complex. However, no comprehensive report is available to account for the common types of inter-atomic interactions in a set of MHC-peptide complexes characterized by MHC allele variation and peptide sequence diversity. The available X-ray crystallography data for MHC-peptide complexes in the Protein Data Bank (PDB) provide an opportunity to identify the prevalent types of inter-atomic interactions at the binding interface.

Two datasets, one consisting of 28 non-redundant class I MHC-peptide complexes and another of 10 non-redundant class II MHC-peptide complexes in the PDB, were examined for inter-atomic interactions. Four types of such interactions, namely BB (backbone MHC - backbone peptide), SS (sidechain MHC - sidechain peptide), BS (backbone MHC - sidechain peptide) and SB (sidechain MHC - backbone peptide), characterize the MHC-peptide interface according to backbone and sidechain atom preference. We measured the percentage distribution of these interactions in a set of MHC-peptide complexes and identified the most common type among them.

We calculated the percentage distributions of the four types of interactions at varying inter-atomic distances. The mean percentage distribution for these interactions, and their standard deviation about the mean distribution, are presented for each type. The prevalence of SS and SB interactions at the MHC-peptide interface is shown in this study. SB is clearly dominant at an inter-atomic distance of 3 Å.
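
A minimal sketch of how such contacts can be counted, assuming atoms are available as (chain, atom_name, x, y, z) records with standard PDB atom naming; the backbone set {N, CA, C, O} and the 3 Å cutoff mirror the analysis described above, while the records themselves are invented:

    # Classify MHC-peptide inter-atomic contacts as BB, BS, SB or SS by
    # backbone/sidechain membership of each partner atom.
    import math

    BACKBONE = {"N", "CA", "C", "O"}    # standard PDB backbone atom names

    def contact_type(mhc_atom, pep_atom):
        a = "B" if mhc_atom[1] in BACKBONE else "S"
        b = "B" if pep_atom[1] in BACKBONE else "S"
        return a + b                    # MHC side first, peptide side second

    def distance(a1, a2):
        return math.dist(a1[2:], a2[2:])

    mhc_atoms = [("A", "OD1", 1.0, 0.0, 0.0), ("A", "O", 4.0, 0.0, 0.0)]
    pep_atoms = [("C", "N", 2.5, 0.0, 0.0), ("C", "CB", 3.4, 0.0, 0.0)]

    counts = {}
    for m in mhc_atoms:
        for p in pep_atoms:
            if distance(m, p) <= 3.0:   # 3 Angstrom cutoff
                t = contact_type(m, p)
                counts[t] = counts.get(t, 0) + 1
    print(counts)

Tallying such counts over every complex in a dataset, and normalizing to percentages, yields distributions of the kind reported in the study.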

The dominance of SB interactions at the interface suggests the importance of peptide backbone conformation during MHC-peptide binding. Currently available algorithms are well developed for protein sidechain prediction upon fixed backbone templates. This study shows the preference for backbone atoms in MHC-peptide binding and hence emphasizes the need for accurate peptide backbone prediction in quantitative MHC-peptide binding calculations.



4. Intronless Genes in Eukaryotes
Meena Kishore Sakharkar and Pandjassarame Kangueane, Nanyang Technological University, Singapore

Eukaryotes have both intron-containing and intron-less genes, and their proportion varies from species to species. Most eukaryotic genes are "multi-exonic," with their gene structure interrupted by introns. Introns account for a major proportion of many eukaryotic genomes. For example, the human genome is proposed to contain 24% introns and only 1.1% exons (Venter et al. 2001). Although most genes in eukaryotes contain introns, there are a substantial number of reports on intronless genes. We recently created a database (SEGE) of intronless genes in eukaryotes using GenBank release 128 sequence data (http://intron.bic.nus.edu.sg/seg/). The eukaryotic subdivision files from GenBank were used to create a dataset containing entries that are conservatively considered "single-exonic" genes according to the "CDS" FEATURE convention. Single exon genes with prokaryotic architectures are of particular interest in gene evolution. Our analysis of this set of genes shows that structures are known for nearly 14% of their gene products. The characteristics and structural features of such proteins are discussed in this presentation.

Reference
Venter, C.J. et al. (2001) The sequence of the human genome. Science, 291, 1304-1351.
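
A sketch (not the SEGE pipeline itself) of how candidate single-exon genes can be flagged from GenBank flat files, assuming Biopython and a local GenBank division file; a CDS whose location is a compound "join(...)" spans multiple exons, while a simple location marks a candidate intronless gene. The filename is a placeholder:

    # Flag candidate single-exon CDS features in a GenBank flat file.
    from Bio import SeqIO
    from Bio.SeqFeature import CompoundLocation

    def single_exon_cds(path):
        for record in SeqIO.parse(path, "genbank"):
            for feature in record.features:
                if feature.type != "CDS":
                    continue
                if isinstance(feature.location, CompoundLocation):
                    continue            # join(...): interrupted by introns
                yield record.id, feature.qualifiers.get("gene", ["?"])[0]

    for accession, gene in single_exon_cds("gbpri1.seq"):
        print(accession, gene)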


Track IV-A-2:
Biodiversity II


Chair: Ji Liqiang, Institute of Zoology, Chinese Academy of Sciences, China

1. Shell Biodiversity Using Animation Technology
Sung-Soo Hong, Hoseo University, Korea
Bu-Young Ahn, Kye-Jun Lee and Ji-Young Kim, Bio-Resources Informatics Department, Korea

The world's natural history museums constitute an important storehouse of information about biodiversity. Although this information is regularly used for studies in systematics and natural history, its application to problems of importance to human well-being has been less frequent. Biodiversity is a new science that builds upon and combines the achievements of taxonomy, biology, biogeography, and ecology. It also draws on applied sciences such as conservation and natural resources management. A wide array of data types has been suggested as being relevant for biodiversity studies, ranging from molecular data to land-use data. Nearly all of these data types can be structured around a core of four data elements: species, date, locality, and source. These data need to be digitized and cleaned up. This paper is accompanied by a multimedia presentation of text, graphics, animation, virtual reality, and sound. This combination of data and its common visualization will provide new insight into the interrelations among the data. We developed a shell biodiversity system using animation technology (http://ruby.kisti.re.kr/~museumfs).

The cyber shell content consists of five compartments: rare shells, marvelous shells, shells of the world, shells of Korea, and shell stories. The database contains pictures and related information on the shells, comprising not only animation displays but also text information. The database files are classified by the species, genus, family, order, class, and division of the shell. Pictures of shells are displayed, and the user can reach the image and virtual-view information by clicking on a displayed object. The system provides various functions to manipulate, visualize, and interact with images on the web, and transformations such as translation, 360-degree rotation, and scaling can be applied to a picture interactively for convenient and effective viewing. An information retrieval system using a corner transformation technique and a multi-level grid file will be developed for query search in future studies.



2. Building the Frog Contents System Using an Animation Technology
Sung-Soo Hong, Hoseo University, Korea
Bu-Young Ahn, Korea Institute of Science & Technology Information, Korea

In recent years, interest in surveying the biological resources of the country has increased greatly, with the goal of creating a national strategy to preserve biodiversity. Inventories and analyses of geographic, ecological, taxonomic and genetic diversity are key issues towards this goal. Frog dissection is a mandatory part of biology and science courses offered in K-12 education, and it is emphasized due to the importance of the subject. Because of this, hundreds of thousands of frogs are dissected for the observation of their internal organs every year. This may not only result in environmental disruption but also risks adversely affecting young students' emotions as a side effect.

In the frog dissection system (http://ruby.kisti.re.kr/~museumfs), virtual dissection is enabled in order to eliminate these undesired effects, and the factuality of the organs is disguised using Photoshop to minimize students' dislike of and aversion to the dissection process. In addition, the system was designed in such a way that, once a student replaces the dissected organs after observation is done, the frog is reanimated and jumps around, so that the student does not treat the subject carelessly but instead treats it with respect for its life.



3. Biodiversity of Autotrophic Cryptogams in Antarctica
Asif Javaid, Abdul Ghafoor and Rashid Anwar, Plant Genetic Resources Institute, National Agricultural Research Center, Islamabad, Pakistan

Antarctica, the southernmost continent, is a landmass of around 13.6 million square kilometers, 98 percent covered by ice up to 4.7 kilometers thick. The continent remained neglected for decades after its discovery; scientific research was initiated in the early 1940s. Two species of phanerogams have been reported, whereas most studies are carried out on cryptogams such as algae, lichens and bryophytes. There are 700 species of terrestrial and aquatic algae in Antarctica, 250 lichens and 130 species of bryophytes, including 100 species of mosses and 25-30 species of liverworts. The species composition and abundance are controlled by many environmental variables, such as nutrients, the availability of water and increased ultraviolet radiation resulting from the depletion of the ozone layer. These cryptogams can be found in almost all areas capable of supporting plant life in Antarctica and exhibit a number of adaptations to the Antarctic environment. There is a need to apply molecular and cellular techniques to study the biodiversity and genetic characteristics of the flora of this region. Biochemical techniques, including DNA sequencing and microsatellite markers, are being used to obtain information about the genetic structure of plant populations. These analyses are designed to assess levels of biodiversity and to provide information on origins, evolutionary relationships and dispersal patterns. The flora of Antarctica needs to be genetically evaluated for characters related to survival in that unique environment, which could be incorporated into economically important plants using transformation.


4. Automatic Mapping and Monitoring of Invasive Alien Plant Species, the South African Experience
J. M. K. Kandeh, J. L. Campos dos Santos and L. Kumar, International Institute for Geo-Information Science and Earth Observation, The Netherlands

Invasive alien plants are a huge problem in South Africa, affecting about 8.28% (10.1 million hectares of land) of the country. When converted to dense stands, this amounts to about 1.7 million hectares, and the problem is spreading rapidly. There is growing concern over the increasing rate at which the alien plants are replacing indigenous plants.

In response to the call of the Convention on Biological Diversity (UNEP, 1994), South Africa has over the years made efforts to compile data on invasive alien plant species. A lot has been done in collating information on the distribution, abundance and habitat types of invasive alien plants, on the role of biological agents in the control of invasive alien plants, and on modeling the water use and spread of invasive alien plants.

Data on invasive alien plants in some parts of the country are still weak and hence do not produce a comprehensive picture of alien plant invasion in the country. In the Greater St. Lucia Wetland Park of KwaZulu-Natal, the South African Government is implementing a mapping and control program for invasive alien plant species.

Control of invasive alien species in the Wetland Park is also undertaken by a number of other organisations, including private landowners, sugar cane farmers and forest plantation owners. There is a lack of a standardised data capture methodology amongst the organisations. There are differences in data formats and map projections, little or no data exchange takes place, and most of the data held on invasive alien plants are not in computerized format. Consequently, there is very little information on the extent and distribution of invasive alien plants in the Greater St. Lucia Wetland Park.

This paper presents the development of a prototype geographic information system which integrates data from various organisations in the Wetland Park. Integrating data from various organisations requires standardisation of data acquisition methodology, data representation and data management amongst the organisations.

In standardising the data acquisition methodology, the methodology of Le Maitre and Versfeld, developed for mapping invasive alien plants at a 1:50,000 scale for a fynbos catchment management system, was used, with the density classes grouped into four classes instead of seven without interfering with the class boundaries.

Using the Structured Systems Analysis Development Methodology, a prototype information system (APMIS) has been designed, tested and implemented. APMIS integrates data from various organisations in the Greater St. Lucia Wetland Park. It is capable of providing geographic information on the extent and distribution of invasive alien plants, assessing the eradication status of mapped areas, and providing operation maps of areas to be cleared. The APMIS strategy can be applied elsewhere where invasive alien plants are a problem and a coordinated approach to both mapping and control amongst all key players is required.

Keywords: Invasive Alien Plants, Geographic Information Systems, Biological Diversity, Systems Development Methodology



5. An Introduction of Chinese Biodiversity Information System
Ji Liqiang, Institute of Zoology, Chinese Academy of Sciences, China

The Chinese Biodiversity Information System (CBIS) is a nation-wide distributed information system that collects, arranges, stores and disseminates data and information related to biodiversity in China. It consists of a center system, 5 disciplinary divisions and dozens of data sources. The Center System of CBIS is located in the Institute of Botany, Chinese Academy of Sciences, Beijing. The 5 divisions are the Zoological Division (in the Institute of Zoology, CAS, Beijing), Botanical Division (in the Institute of Botany), Microbiological Division (in the Institute of Microbiology, CAS, Beijing), Inland Wetland Biological Division (in the Institute of Hydrobiology, CAS, Wuhan) and Marine Biological Division (in the South China Sea Institute of Oceanology, CAS, Guangzhou). The data sources cover 15 institutes in CAS and include botanical gardens, field research stations, museums, cell banks, seed banks, culture collections and research groups. The Center System is responsible for building up and maintaining an integrated, national-scale biodiversity database, an environmental factor and vegetation database, a model base and expert system at the ecosystem level, and platforms and tools for modeling and expert systems. The Disciplinary Divisions are responsible for building up and maintaining databases, model bases and expert systems in their fields, focused on data and information at the species level. The Data Sources are responsible for building up and maintaining databases based on their local situation and disciplinary character, combined with GIS technology to present biodiversity information and data both in tables and in graphics.

82 databases have been set up in CBIS and are being improved gradually; more than 590,000 records have been collected and entered into the CBIS database system, and most of them are accessible via the Internet. They include species inventory databases, endangered and protected species databases, ecosystem databases, specimen databases, botanical garden databases, a culture collection database, a cell bank database, economic species databases, etc.

The species inventory databases for animals, plants and microorganisms contain data on systematics, names, distribution, habitat and references. The endangered and protected species databases contain data on the grade of protection, reasons for endangerment, protection measures, pictures, etc. The specimen databases contain data on the collection, identification, storage and cataloguing of species. CBIS recognized the importance of metadata for data sharing and exchange from its initial period, and has since set up a series of metadata standards for the institutes participating in CBIS. These include standards for dataset metadata, a data dictionary, and metadata on institutions and staff in CBIS. The dataset metadata consist of six parts: dataset identity, data collection, data management, data description, data access and metadata management. All CBIS databases must be accompanied by a metadata file or table when they are put on the Internet or exchanged with other institutions.
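
As an illustration of such a dataset metadata record, the six parts named above can be grouped as sketched below; the individual fields inside each part are hypothetical, since the abstract lists only the part names:

    # Sketch of a CBIS-style dataset metadata record. The six sections come
    # from the abstract; the fields within them are invented examples.
    from dataclasses import dataclass, field

    @dataclass
    class DatasetMetadata:
        identity: dict = field(default_factory=dict)             # title, ID, keywords
        data_collection: dict = field(default_factory=dict)      # methods, dates, sites
        data_management: dict = field(default_factory=dict)      # custodian, update cycle
        data_description: dict = field(default_factory=dict)     # fields, units, scope
        data_access: dict = field(default_factory=dict)          # URL, access conditions
        metadata_management: dict = field(default_factory=dict)  # author, revision date

    record = DatasetMetadata(
        identity={"title": "Endangered species of China", "id": "CBIS-0001"},
        data_access={"url": "http://www.example.org/cbis", "policy": "open"},
    )
    print(record.identity["title"])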



6. Biodiversity Issues in Taiwan
Shang-Shyng Yang and Jong-Ching Su, National Committee for CODATA/Taiwan and Department of Agricultural Chemistry, National Taiwan University, Taiwan

In order to conserve and protect the very rich biological resources that have evolved in a unique natural environment, the government in Taiwan has set up a special committee and assigned a government agency, both at the cabinet level, to be in charge of planning and implementing relevant programs, respectively. Convening the "Prospects of Biodiversity," "Biodiversity-1999" and "Biodiversity in the 21st Century" symposia has been the main means of building the national consensus and identifying issues to be studied, which has motivated scientists to initiate this challenging task with the support of research funding from related agencies. There are 6 national parks, 18 nature reserves, 13 wildlife protection areas and 24 nature protection areas, covering in total 12.2% of the land area. The Policy Formulating Committee for Climate Changes has recommended the enforcement of education on biodiversity (at all levels of school and general public education) and has formulated working plans for national biodiversity preservation and bioresources surveys. The research programs in progress, supported by national funding, include surveys of species, habitat, ecosystem and genetic diversity, long-term monitoring of diversity, sustainable bioresource utilization and compilation of the flora of Taiwan. The increase in the number of scientific publications and the increased emphasis placed by the news media show the growing concern of both the academic and public domains with the biodiversity issue. In addition, material and information databases related to biological resources of various categories have been established and are revised regularly. The following bioscience databases have been established in Taiwan: the National plant genetic resources information system, Multimedia databank of Taiwan wildlife, Taiwan Agricultural Institute plant information system, Distribution and resources of fishes in Taiwan, Herbaria at many sites, Cell bank, Asian vegetable genetic resources and seeds, Database of pig production, Registry of pure-bred swine, Mating, farrowing, performance and transfer of ownership of pure-bred swine, Food marketing information system database, Food composition table in Taiwan, Database on heavy metals in Taiwan soils, Greenhouse gases emission from agriculture, and Global change database generated in Taiwan.

Keywords: Biodiversity, national park, public education, bioscience, conservation policy, database

 


Track IV-B-2:
Bioinformatics


Chair: Takashi Kunisawa, Science University of Tokyo, Japan

Biologists are facing the challenge of organizing and integrating a vast amount of data and information, produced mainly by genome projects. This session focuses primarily on quality control in sequence databases. Phylogenetic analyses of sequence data are also within its scope.

1. Unweaving Regulatory Networks: Automated Extraction from Literature and Statistical Analysis

Andrey Rzhetsky, Columbia Genome Center, Columbia University, USA

In the first part of the talk I will describe our ongoing effort to build a natural language processing system that extracts information on interactions between genes and proteins from research articles. In the second part I will introduce an algorithm for predicting molecular networks from sequence data and stochastic models of the birth of scale-free networks.



2. Genome rearrangements in the clinic and in evolution
David Sankoff, Centre de recherches mathématiques, Université de Montréal, Canada

We analyze data on rearrangement breakpoints resulting from individual real-time cytogenetic events in order to help understand the distribution of multiple breakpoints in comparative maps. We compare breakpoint positions from four different databases, covering reciprocal translocations, inversions and deletions in neoplasms; reciprocal translocations and inversions in families carrying rearrangements; and the human-mouse comparative map. For each set of positions we construct breakpoint distributions for as many as possible of the 44 autosomal arms. We identify and interpret four main types of distribution:

  1. The uniform distribution associated both with families carrying translocations or inversions, and with the comparative map,
  2. Telomerically skewed distributions of translocations or inversions detected consequent to births with malformations,
  3. Medially clustered distributions of translocation and deletion breakpoints in tumor karyotypes,
  4. Bimodal translocation breakpoint distributions for chromosome arms containing telomeric proto-oncogenes.
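
As an illustration only (toy densities, not the authors' data), the four shapes can be mimicked over a relative arm position p in [0, 1], with p = 1 at the telomere:

    # Toy simulation of the four breakpoint-distribution shapes along a
    # chromosome arm (relative position p in [0, 1], p = 1 at the telomere).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    uniform   = rng.uniform(0, 1, n)             # type 1: uniform
    telomeric = rng.beta(4, 1, n)                # type 2: skewed to the telomere
    medial    = rng.beta(4, 4, n)                # type 3: clustered mid-arm
    bimodal   = np.where(rng.random(n) < 0.5,    # type 4: two modes
                         rng.beta(1, 6, n), rng.beta(6, 1, n))

    for name, x in [("uniform", uniform), ("telomeric", telomeric),
                    ("medial", medial), ("bimodal", bimodal)]:
        hist, _ = np.histogram(x, bins=5, range=(0, 1))
        print(f"{name:9s}", hist)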

 

3. PIR Integrated Databases And Data-Mining Tools For Genomic And Proteomic Research
Zhang-Zhi Hu, Winona C. Barker and Cathy H. Wu, Protein Information Resource, National Biomedical Research Foundation, Georgetown University Medical Center, Washington, DC, USA

The human genome project has revolutionized the practice of biology and the future potential of medicine. With the accelerated accumulation of high-throughput genomic and proteomic data, computational approaches are increasingly important for deriving scientific knowledge and hypotheses.

As an integrated public resource of protein informatics, the Protein Information Resource (PIR) provides many databases and analytical tools to support genomic and proteomic research and scientific discovery. The Protein Sequence Database (PSD) is the major annotated protein database in the public domain, containing about 280,000 sequences covering the entire taxonomic range. To provide high quality annotation and promote database interoperability, the PIR uses rule-based and classification-driven procedures based on controlled vocabulary and accepted ontologies, and includes evidence attribution to distinguish experimentally determined from predicted protein features. PIR-NREF, a non-redundant database containing almost 1,000,000 proteins from PIR-PSD, Swiss-Prot, TrEMBL, GenPept, RefSeq, and PDB, provides a timely and comprehensive sequence collection with source attribution for protein identification, ontology development of protein names, and detection of annotation errors. The composite protein names in NREF, including synonyms and alternate names, and the bibliographic information from all underlying databases provide an invaluable knowledge base for the application of natural language processing or computational linguistics techniques to develop a protein name ontology. The iProClass database addresses the database interoperability issues arising from voluminous, heterogeneous, and distributed data. It provides comprehensive family relationships and functional and structural features for about 800,000 proteins in PIR-PSD, Swiss-Prot, and TrEMBL, with rich links to over 50 databases of protein families, functions, pathways, protein-protein interactions, post-translational modifications, structures, genomes, ontologies, literature, and taxonomy. The PIR databases are implemented in an object-relational database system and are accessible online (http://pir.georgetown.edu) for the exploration of proteins and their comparative analysis. They help users to answer complex biological questions that may typically involve querying multiple sources, and to detect interesting relationships among protein sequences and groups.

The PIR is supported by the NIH grant P41 LM05798, iProClass is supported by the NSF grants DBI-9974855 and DBI-0138188, and the Protein Name Ontology project is supported by the NSF grant ITR-0205470.

 

4. Extraction of Phylogenetic Information from Gene Order Data
Takashi Kunisawa, Science University of Tokyo, Japan

Molecular phylogeny is frequently inferred from comparisons of the nucleotide or amino acid sequences of a single gene or protein family from different organisms. It is now known that there are a number of difficulties with this approach, for instance, correct alignment of sequence data, biased base (or amino acid) compositions among species, rate variation among sites and/or species, mutational saturation, and long-branch attraction artifacts. Thus, the development of new methods that can produce reliable phylogenetic trees is an important issue. Here we present a simple method for reconstructing branching orders among genomes based on gene transpositions. We demonstrate that the occurrence or absence of a gene transposition event can provide empirical evidence for branching orders, in contrast to the phenetic approaches of overall similarity or minimum distance. This approach is applied to the evolutionary relationships among the completely sequenced Gram-positive bacteria. The complete genomic sequence data allow one to search for target gene transpositions at a comprehensive level.
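
A minimal sketch of the underlying idea, with invented gene orders: a transposition creates new gene adjacencies, and genomes that share a derived adjacency absent from an outgroup can be grouped as having inherited the same event:

    # Group genomes by a shared derived gene adjacency created by a
    # transposition. Gene orders are invented toy data.
    def adjacencies(order):
        """Unordered neighbor pairs in a circular gene order."""
        n = len(order)
        return {frozenset((order[i], order[(i + 1) % n])) for i in range(n)}

    outgroup = ["a", "b", "c", "d", "e"]       # ancestral order
    genomes = {
        "G1": ["a", "c", "d", "b", "e"],       # gene b transposed
        "G2": ["a", "c", "d", "b", "e"],       # same derived order
        "G3": ["a", "b", "c", "d", "e"],       # ancestral order retained
    }

    derived = frozenset(("d", "b"))            # adjacency created by the move
    clade = [name for name, order in genomes.items()
             if derived in adjacencies(order)
             and derived not in adjacencies(outgroup)]
    print(clade)                               # ['G1', 'G2'] share the event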



Earth and Environmental Data


Track I-C-3:
Frameworks for Sharing Geographic Data

Chair: Michael Goodchild, National Center for Geographic Information and Analysis and University of California, USA

This session reviews emerging technological and institutional models for widespread sharing of geographic data within and among large numbers of scientists and other users of geographic information. The frameworks described are complementary to each other. Individually and together they will facilitate expanded access and ease of use of geographic data across diverse and numerous scientific disciplines.

The framework initiatives to be addressed include:

  1. The National Map,
  2. Geospatial One-Stop,
  3. The Geography Network, and
  4. Frameworks for Sustainability of GIS Development in Low Income Countries.

From a U.S. perspective, the first three of these initiatives are all being developed within the standards and interoperability context of the U.S. National Spatial Data Infrastructure (NSDI). From a global perspective, these spatial database sharing efforts as well as those from many other nations are being developed within the context of the Global Spatial Data Infrastructure (GSDI) initiative.

1. Frameworks for Sustainability of GIS Development in Low Income Countries
Gilberto Camara, Director of Earth Observation, INPE, Brazil

This presentation discusses the development of Geographic Information System (GIS) software and the technological approaches pursued in Brazil. Issues encountered in sustaining a complex technology in a large low-income country (LIC) are outlined. In describing the Brazilian experience, the prevalent assumption that LICs do not possess the complex technical and human resources required to develop and support GIS and similar technologies is challenged. The challenges, benefits and drawbacks of developing GIS software capabilities locally are examined, and a number of important applications where local technology development has contributed to better understanding and cost-effective solutions are highlighted. Finally, some of the potential long-term benefits of a "learning-by-doing" approach, and how other countries might benefit from the Brazilian experience, are discussed.

 

2. The Geography Network
Clint Brown, ESRI, USA

Many now see the Internet as the most effective means of meeting the accelerating demand for geographically referenced information. Launched by ESRI in June 2000 with the support of the National Geographic Society and many data publishers (EarthSat, GDT, WRI, US EPA, Tele Atlas, Space Imaging, etc.), the Geography Network <www.geographynetwork.com> is a global, collaborative, multi-participant network of geographic information users and providers, including government agencies, commercial organizations, data publishers, and service providers, who use the Internet to share, publish, and use geographically referenced information. The Geography Network can be thought of as a large online library of distributed GIS information available to everyone. Users consult the Geography Network catalog, a searchable index of all information and services available to Geography Network users. A wide spectrum of simple to advanced GIS and visualization software technologies and online tools allows users to define areas of interest, search for specific geographic content, and be guided to mapping services. Using any Internet browser, users can access data physically located on servers around the globe and can connect to one or more sites at the same time. They can use digital map overlay and visualization, and combine and analyze many types of data from different sources. These data can be provided immediately to browsers or to desktop GIS software. Thousands of data layers are already available, and Geography Network content is constantly increasing. Much of the content is accessible for free. Commercial content is also provided and maintained by its owners. Viewing or downloading commercial content, or using commercial services, is charged through the Geography Network's e-commerce system. Becoming a provider is free and simple. The Geography Network uses open GIS standards and communication protocols, and serves as a test bed for data providers and the Open GIS Consortium. This presentation will show how the system works, explain the facilities provided, indicate the range of providers, describe the genesis of the system and its progress, and discuss future plans and directions.

 

3. Geospatial Information One-Stop
M. Robinson, Federal Geographic Data Committee, USA

The Geospatial One-Stop is part of a Presidential Initiative to improve effectiveness, efficiency, and customer service throughout the U.S. Federal Government. It builds upon the National Spatial Data Infrastructure (NSDI) and will accelerate its development and implementation. Geospatial One-Stop is classified as a Government-to-Government (G2G) project because it will focus on sharing and integrating Federal, State, local, and tribal data, and enable more effective management of government business. The vision is to spatially enable the delivery of government services.

The goals of Geospatial One-Stop include providing fast, low-cost, reliable access to geospatial data for government operations; facilitating the G2G interactions needed for vertical missions such as homeland security; supporting the alignment of roles, responsibilities and resources; and establishing a methodology for obtaining multi-sector input for coordinating, developing and implementing geographic (data and service) information standards, to create the consistency needed for interoperability and to stimulate market development of tools.

The five major tasks identified in the Project Plan are:

  1. Develop and implement data standards for NSDI Framework Data.
  2. Fulfill and maintain an operational inventory (based on standardized documentation, using the FGDC Metadata Standard) of NSDI Framework Data from Federal agencies, and publish the metadata records in the NSDI Clearinghouse network.
  3. Publish metadata of planned acquisition and update activities for NSDI Framework Data from Federal agencies in the NSDI Clearinghouse network.
  4. Prototype and deploy data access and web mapping services for NSDI Framework Data from Federal agencies.
  5. Establish a comprehensive Federal portal to the resources described in the first four components (standards, priority data, planning information, and products and services), as a logical extension to the NSDI Clearinghouse network.

 

4. The National Map - Sharing Geospatial Data in the 21st Century
Barbara J. Ryan, U.S. Geological Survey, Reston, Virginia, USA

Over the last century, the United States has invested on the order of $1.6 billion and 33 million person-hours in the standard (1:24,000-scale) topographic map series. These maps and associated digital data are the country's most extensive geospatial data infrastructure. They are also the only coast-to-coast, border-to-border coverage of our Nation's critical infrastructure - highways, bridges, dams, power plants, airports, etc. It is, however, an asset that is becoming increasingly outdated. These maps range in age from one year (those updated last year) to 57 years (those that have never been updated). The average age of these 55,000 maps is 23 years.

In January 2001, the Department of the Interior's U.S. Geological Survey (USGS) undertook a decadal effort to transform the largely paper series to an online, seamless, integrated database known as The National Map. Extensive partnerships with local and State governments, other federal agencies, non-governmental organizations, universities and the private sector are being forged to construct The National Map. It is not just a "federal" map, it is a "national" map -- an important distinction allowing greater leveraging of limited resources in order to fulfill the geospatial community's goal of "collect once, use many times."

These maps and related data touch, if not underpin, many sectors of the economy including the housing and development industry, agriculture, transportation, recreation, and emergency preparedness. After September 11th, the USGS provided more than 120,000 maps, hundreds of Landsat images and digital data files to assist with disaster planning, prevention, mitigation, and response efforts conducted at the local, State, and federal levels.

Coordination and standards-development mechanisms like the President's Geospatial One-Stop initiative, the Federal Geographic Data Committee, the Office of Management and Budget Circular A-16, and State-based geographic information consortia both advance and strengthen the policy framework for sharing geospatial data and other information assets of governments. The National Map, much like topographic maps in the last century, is a physical manifestation, in fact a visualization of this policy framework.



Track IV-A-6:
2D and 3D Applications of GIS Systems: Interoperability of Integrated Cartographic Database Management


Chairs: Jacques Segoufin, Institut de Physique du Globe, Paris, France and
Alexei Novikov, National Technical University of Ukraine, Kiev Polytechnic Institute, Ukraine

Depending on the country, the projection of map elements from the curved surface of our planet onto a plane rests on the choice of a reference system.

The choice of reference ellipsoid varies, and various recommendations are available. The quality of the correspondence between world, regional, and local maps also depends on the type of projection used.

Newly acquired and updated data are increasingly positioned in the UTM (WGS84) system.

Several European projects aim to make it easier to convert and transfer local "technical data" into UTM (WGS84).
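Such conversions are routine with modern projection libraries. A minimal sketch, assuming the pyproj library and illustrative EPSG codes (4326 for geographic WGS84, 32631 for UTM zone 31N on WGS84), of re-projecting a point of local data into UTM/WGS84:

    # Re-project a longitude/latitude pair into UTM zone 31N on the WGS84 datum.
    from pyproj import Transformer

    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32631", always_xy=True)
    easting, northing = to_utm.transform(2.35, 48.85)  # a Paris-area point (lon, lat)
    print(round(easting), round(northing))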

For this session it is proposed to address the following problems in particular:

  • Theoretical aspects; problems of reference systems and projections
  • Readjustment of data derived from different grids
  • Multi-parameter integration
  • Networked organization of the components of a GIS
  • Linking the continental and oceanic domains
  • Status of the major international projects (UNESCO, IGN, Geological Surveys)

1. Application of methods of space-distributed systems modeling in ecology
M. Zgurovsky, A. Novikov, National Technical University of Ukraine, Kiev Polytechnic Institute

A review of the studies carried out at NTUU "KPI" and at the Institute of Cybernetics of the National Academy of Sciences of Ukraine is presented. Two- and three-dimensional equations of diffusion and heat and mass transfer are used as mathematical models. The models make it possible to take account of the spatial distribution, structural non-uniformity, and anomalous properties of the physical processes by which harmful impurities spread in the atmosphere, open water basins, and subsoil waters.
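The abstract does not reproduce the model equations themselves; a representative two-dimensional advection-diffusion equation of the type described, with c the impurity concentration, (u, v) the transport velocity, D_x and D_y diffusion coefficients, \lambda a decay rate, and Q the sources, is:

    \frac{\partial c}{\partial t}
      + u \frac{\partial c}{\partial x}
      + v \frac{\partial c}{\partial y}
      = \frac{\partial}{\partial x}\left( D_x \frac{\partial c}{\partial x} \right)
      + \frac{\partial}{\partial y}\left( D_y \frac{\partial c}{\partial y} \right)
      - \lambda c + Q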

The processes considered are characterized by substantial distribution in space; efficient methods for the numerical solution of the two- and three-dimensional model equations are therefore presented.

Program packages that efficiently solve the problems of modeling, forecasting, and assessment of ecological processes in various environments are also described.

 

2. A Geographical and Ethnopharmacological Survey of the Toxic Plants of Mauritius
A. Fakim-Gurib, University of Mauritius
P. van Brandt, Université catholique de Louvain, Brussels, Belgium

The southwestern region of the Indian Ocean is a geographically privileged zone for its biological diversity and is well known for its flora of endemic species. In Mauritius the traditional use of plants is very common, but many of the plants used can pose potential health risks.

Although some plants are known to contain a large number of biologically active compounds with many effects beneficial to humans and animals, some of these same constituents should be subject to dosage control: through misuse they have proved extremely toxic, with harmful effects on health. These adverse effects may appear very suddenly or take time to develop. Fortunately, relatively few plants cause dangerous disorders when ingested. Nevertheless, precautions must be taken to avoid poisoning, particularly in young children. A mission was therefore carried out in Mauritius to identify these toxic plants. Sixty-nine species were inventoried as potentially toxic, ranging from Thevetia peruviana (Apocynaceae), considered extremely toxic, to Dieffenbachia seguine (Araceae), considered moderately to weakly toxic. Notably, two indigenous plants endemic to the region are also considered to have toxic properties: Cnestis glabra (Connaraceae) and Agauria salicifolia (Ericaceae).

The results of this mission illustrate the differing degrees of toxicity, the chemical constituents, and their effects.

The long-term beneficial effects of these toxins should not be underestimated, as the example of Taxus brevifolia shows: among other things, it gave rise to the famous Taxol.

Another aspect worth taking into account is that climate and environmental factors have a direct influence on the phytochemistry of the local floral biodiversity.

 

3. Structural Map of the Indian Ocean
J. Segoufin, Institut de Physique du Globe de Paris, France

Within the framework of the activities of the CCGM (Commission de la Carte Géologique du Monde, the Commission for the Geological Map of the World), under the supervision of UNESCO, it was decided to create a number of geological, tectonic, and structural maps encompassing the marine domain, for which a great deal of information is now available; the Commission's body for the mapping of the sea floor is in charge of that domain.

Thus, two years ago, it was decided to publish a structural map of the Indian Ocean.

This map aims to synthesize current knowledge of the ocean and to show its formation and evolution from the geophysical data gathered by various institutes. It has an educational purpose, the dissemination of knowledge, and is to be distributed in secondary schools, colleges, and universities.

After several trials, the geographic limits of the map were set at 0° to 155° E and 71° S to 30° N. The map will be published in the Mercator projection at a scale of 1:10,000,000.
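For reference, the spherical form of the Mercator mapping used for the sheets is, with R the scaled Earth radius, \lambda_0 the central meridian, and (\lambda, \varphi) longitude and latitude (the ellipsoidal form adds eccentricity terms):

    x = R (\lambda - \lambda_0), \qquad
    y = R \ln \tan\left( \frac{\pi}{4} + \frac{\varphi}{2} \right)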

The structural map of the Indian Ocean consists of four sheets:

Sheet 1: 0° to 80° E, 30° S to 30° N
Sheet 2: 80° E to 155° E, 30° S to 30° N
Sheet 3: 0° to 80° E, 71° S to 30° S
Sheet 4: 80° E to 155° E, 71° S to 30° S

All of the data currently accessible will appear on this map: bathymetric contours; magnetic anomalies; the age of the oceanic crust, derived from the magnetic anomalies; earthquake epicenters, divided into two classes (magnitude above 6 and magnitude below 6); transform faults and fracture zones; sediment thickness; subduction zones; ridge axes; active volcanoes; astroblemes; DSDP and ODP drill sites that reached the oceanic crust; seamounts and submarine plateaus; etc.

To complement this map, it was also decided to produce a physiographic sheet, computed from the grid of Sandwell et al. and covering the whole Indian Ocean on a single sheet.

Most of the difficulty encountered in creating the structural map lies in making data from diverse origins consistent within a single format, which most of the time requires preliminary processing.

At present, sheets 1, 2, and 3 are finished and sheet 4 is in progress.

A presentation of the complete map is planned for the EUG meeting in Nice in April 2003.

The aim of this work is to disseminate the current state of knowledge about the Indian Ocean through this series of maps, and also through an interactive digital product (CD-ROM) on which the various kinds of information will appear as superimposed layers that can be added or removed on demand.

Completion is scheduled for 2004, when the paper and digital versions of the Structural Map of the Indian Ocean will be presented at the IGC meeting in Florence.

 

4. A Gateway to Information on Biological Collections, Specimens and Observations (ICSOB)
Guy Baillargeon, Agriculture and Agri-Food Canada

The ICSOB Gateway is a prototype search and mapping engine specialized in observation data and biological specimens from natural history collections. ICSOB indexes the data available through biodiversity networks accessible on the Internet by distributed queries, such as The Species Analyst (TSA), the World Biodiversity Information Network (REMIB), and the European Natural History Specimen Information Network (ENHSIN). Just as search engines (such as Google or AltaVista) help locate hypertext documents, ICSOB harvests names from the collections distributed across these networks and connects users directly to the original data sources. Data records pass directly from the authorized custodians of the primary data to the end users in real time. In addition, records carrying geographic coordinates (longitude, latitude) are plotted dynamically on a world map on which each distribution point is linked directly to the original data. The ICSOB Gateway provides a single point of access to millions of individual records from several distinct biodiversity networks. ICSOB is fully integrated with the multilingual version of the Integrated Taxonomic Information System (ITIS), making the data accessible through common names, scientific names, or synonyms.
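The distributed-query pattern described above can be sketched in a few lines of Python; the endpoints and record fields below are hypothetical placeholders, not the actual TSA, REMIB, or ENHSIN interfaces.

    # Fan a species-name query out to several (hypothetical) biodiversity
    # networks in parallel, then keep the georeferenced records for mapping.
    from concurrent.futures import ThreadPoolExecutor
    from urllib.parse import quote
    from urllib.request import urlopen
    import json

    NETWORKS = [  # placeholder URLs standing in for TSA, REMIB, ENHSIN
        "http://example.org/tsa/search?name={}",
        "http://example.org/remib/search?name={}",
    ]

    def query(url_template, name):
        with urlopen(url_template.format(quote(name))) as resp:
            return json.load(resp)  # assume each source answers with JSON records

    def georeferenced_records(name):
        with ThreadPoolExecutor() as pool:
            batches = pool.map(lambda url: query(url, name), NETWORKS)
        return [rec for batch in batches for rec in batch
                if rec.get("latitude") is not None and rec.get("longitude") is not None]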

 


Track III-C-3:
Earth and Environmental Data


Chair: Liu Chuang, Chinese Academy of Sciences, Beijing, China

1. Interactive Information System for Irrigation Management
Md Shahriar Pervez, International Water Management Institute, Sri Lanka
Mohammad Ahmadul Hoque, Surface Water Modelling Centre, Bangladesh

Irrigation management is key to efficient and timely water distribution in canal command areas, taking crop factors into account, and it requires adequate, continuously updated information about the irrigation system. This paper presents a GIS tool for irrigation management that provides information interactively for the decision-making process. The Interactive Information System (IIS) was developed to support the operation and management of command area development and to calculate irrigation efficiency at the field level. The development is based on geographic information systems (GIS), but it is gradually being adapted to the decision and management functions that lie at the heart of the planning process of any irrigation project. It also helps design engineers assess the impact of the system's design parameters. The tool is an ArcView-based GIS application, developed in Avenue code, that integrates GIS with a relational database management system (RDBMS). Effective integration of GIS with the RDBMS enhances performance evaluation and diagnostic analysis. The application requires real-time topographic data, stored as spatially distributed datasets, while a back-end RDBMS stores the related attribute information. This lets an irrigation manager perform real-time calculation and analysis covering:

a) Drawing of the detailed canal and drainage system by category, along with other spatial layers
b) Cross-section profile of a canal
c) Comparison of cross-sections
d) Long profile of a canal
e) Cut-and-fill calculation for a cross-section against the designed cross-section of that section
f) Conveyance calculation for a particular section (see the sketch after this list)
g) Area-elevation curve for the command area or any drawn area
h) Areas affected by the failure of any irrigation structure
i) Retrieval of information on current irrigation structures, along with images
j) Calculation of the efficiency of the system
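As an illustration of item f), channel conveyance is conventionally computed from Manning's formula, K = (1/n) A R^(2/3). The sketch below assumes a simple trapezoidal section and metric units, neither of which the abstract specifies.

    # Conveyance of a trapezoidal canal section from Manning's formula.
    # A = flow area, P = wetted perimeter, R = A/P the hydraulic radius,
    # n = Manning's roughness coefficient. Illustrative values only.
    def conveyance(bottom_width, side_slope, depth, n=0.025):
        area = (bottom_width + side_slope * depth) * depth
        wetted_perimeter = bottom_width + 2 * depth * (1 + side_slope ** 2) ** 0.5
        hydraulic_radius = area / wetted_perimeter
        return area * hydraulic_radius ** (2.0 / 3.0) / n

    print(conveyance(bottom_width=5.0, side_slope=1.5, depth=2.0))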

An easy updating mechanism for the associated database keeps the system current with the real field situation. A user-friendly graphical user interface at the front end helps the manager operate the application easily. Using the application's point-and-click functions, an irrigation manager can generate outputs as maps, tables, and graphs that guide prompt, well-founded decisions within a few minutes.

 

2. Results of a Workshop on Scientific Data for Decision Making Toward Sustainable Development: Senegal River Basin Case Study
Paul F. Uhlir, U.S. National Committee for CODATA, National Research Council, USA
Abdoulaye Gaye, Senegalese National Committee for CODATA, Senegal
Julie Esanu, U.S. National Committee for CODATA, National Research Council, USA

Scientific databases relating to the environment, natural resources, and public health on the African continent are, for various reasons, difficult to create and manage effectively. Yet the creation of these and other types of databases, and their subsequent use to produce new information and knowledge for decision-makers, is essential to advancing scientific and technical progress in the region and to its sustainable development. The U.S. National Committee for CODATA collaborated with the Senegalese National CODATA Committee to convene a "Workshop on Scientific Data for Decision-Making Toward Sustainable Development: Senegal River Basin Case Study," held on 11-15 March 2002 in Dakar, Senegal. The workshop examined multidisciplinary data sources and data handling in the West Africa region, using the Senegal River Basin as a case study, to determine how these data are, or could be better, used in decision making related to sustainable development. This presentation provides an overview of the workshop results and a summary of the published report.



3. Study on Spatial Databases of Chinese Ecosystems
Yue Yan-zhen, Chinese Academy of Sciences, China

The construction of spatial databases for Chinese ecosystems is based on the Chinese Ecosystem Research Network (CERN) of the Chinese Academy of Sciences (CAS). To meet the challenges of understanding and resolving resource and environmental issues at regional and larger scales, CERN has been under construction since 1988 with the support of the Chinese Academy of Sciences. CERN consists of 35 ecological stations covering agricultural, forest, grassland, lake, and bay ecosystems, which produce a large volume of monitoring and measurement data every day. The quality of these data is controlled by CERN's five sub-centers: water, soil, atmosphere, biology, and aquatic systems. Finally, all of the calibrated data, including spatial data, are collected in the synthesis center.

We constructed the spatial databases to connect this enormous body of monitoring data with ecological spatial information. The study of the spatial databases covers:
1. A standard for spatial data classification
2. The structure of the spatial databases
3. The functions of the spatial databases
4. The management of the spatial databases
5. Network-based data-sharing services
6. Data-sharing policy

Keywords: ecosystem network; geographic information system; data sharing

 

4. Development of the Global Map: National and Cross-National Coordination
Robert A. O'Neil, Natural Resources Canada, Ottawa, Canada

The Global Map is geospatial framework data for the Earth's land areas. This framework will be used to place environmental, economic, and social data in geographic context. The Global Map concept permits individual countries to determine how they will be represented in a global database consisting of eight layers of standardized data: administrative boundaries, drainage, transportation, population centres, elevation, land cover, land use, and vegetation cover, at a data density suitable for presentation at a scale of 1:1 million. Usually it is the national mapping organizations that contribute their country's data to the Global Map, which is then made available at marginal or no cost.

At present, 94 nations have agreed to contribute information to the Global Map and an additional 42 are considering their participation. To date, coverage has been completed and is available for 11 countries.

While there is a wealth of source data available for this undertaking, not all nations have the capacity to evaluate the source data sets, make corrections, and transform them into a contribution to the Global Map. A proposal to relax the specifications in order to hasten the completion of the Global Map will have to be balanced against the problems of dealing with heterogeneous databases, particularly in integration, analysis, and modeling.



Track III-D-4:
The Use of Artificial Intelligence and Telematics in Environmental and Earth Sciences

Chairs: Jacques-Octave Dubois, France and
Alexei Gvishiani, Russia

New tools such as artificial intelligence algorithms are needed to effectively manage and process the vast amounts of environmental and earth science data.

Given that databases are increasingly widespread and distributed, telematics techniques (computer-based and telecommunications techniques) are needed to apply these algorithms. In other words, to process this considerable amount of information, clustering algorithms must be adapted for, and applied in, computer networks.

The two books published by the co-chairs (Editions CODATA, Springer) will be showcased at this session.

1. Application of Artificial Intelligence and Telematics in the Earth and Environmental Sciences
Jacques-Octave Dubois, France
Alexei Gvishiani, Russia

Presentation of the book Artificial Intelligence and Dynamic Systems in Geophysical Applications, by A. Gvishiani and J.O. Dubois, Schmidt United Institute of Physics of the Earth RAS, CGDS, and Institut de Physique du Globe de Paris.

This volume is the second of a two-volume series written by A. Gvishiani and J.O. Dubois.
The series presents the application of new artificial intelligence and dynamic systems techniques to geophysical data acquisition, management, and studies. Most of the mathematical models, algorithms, and tools presented were developed by the authors. The first volume of the series, published in 1998, is entitled "Dynamical Systems and Dynamic Classification Problems in Geophysical Applications." It is devoted to the application of dynamic systems, pattern recognition, and finite vector classification with learning to a variety of geophysical problems.

The book "Artificial Intelligence" introduces geometrical clustering and fuzzy logic approaches to geophysical data analysis. A significant part of the volume is devoted to applying the artificial intelligence techniques introduced in volumes 1 and 2, to fields such as seismology, geodynamics, geoelectricity, geomagnetism, aeromagnetics, topography and bathymetry.

As in the first volume, this volume consists of two parts describing complementary approaches to the analysis of natural systems. The first part, written by A. Gvishiani, deals with new ideas and methods in geometrical clustering and the fuzzy-logic approach to geophysical data classification. It lays out the mathematical theory and formalized algorithms that form the basis for classification and clustering of the vector objects under consideration, and it lays the foundation for the second part of the book, which uses this classification in the study of dynamical systems.

The second part, written by J.O. Dubois, is concerned with various theoretical tools and their application to the modeling of natural systems using large geophysical data sets. Fractals and dynamic systems are used to analyse geomorphological (continental and marine), hydrological, bathymetric, gravimetric, seismological, geomagnetic, and volcanological data. In these applications, chaos theory and the concept of self-organized criticality are used to describe the evolution of dynamic systems.

The first volume is devoted to the mathematical and algorithmic basis of the proposed artificial intelligence techniques; this volume presents a wide range of applications of those techniques to geophysical data processing and research problems. At the same time it presents the reader with another algorithmic approach, based on fuzzy logic and geometrical illumination models.

Many readers will be interested in the two volumes (vol.1, J.O. Dubois, A. Gvishiani "Dynamic Systems and Dynamic Classification Problems in Geophysical Applications" and the present vol.2, A. Gvishiani, J.O. Dubois "Artificial Intelligence and Dynamic Systems in Geophysical Applications") as a package.

 

2. The Environmental Scenario Generator (ESG) a Distributed Environmental Data Mining Tool
Eric A. Kihn, NOAA/NGDC, Boulder, CO, USA
Dr. Mikhail Zhizhin, RAS/CGDS, Moscow, Russia

The Environmental Scenario Generator (ESG) is a network-distributed software system designed to allow a user running a simulation to intelligently access distributed environmental data archives for inclusion and integration with model runs. The ESG is built to solve several key problems for the modeler. The first is to provide an intelligent "data mining" tool, so that key environmental data can not only be retrieved and visualized but user-defined conditions can also be searched for and discovered. As an example, a user modeling a hurricane's landfall might want to model the result of an extreme rain event prior to the hurricane's arrival. Without a tool such as ESG, the simulation coordinator would be required to know:

  • For my region, what constitutes an extreme rain event?
  • How can I find an example in the real data of when such an event occurred?
  • What about temporal or spatial variations of my scenario, such as finding the wettest week, month, or year?

If we consider combining these questions across multiple parameters, such as temperature, pressure, and wind speed, and then add multiple regions and seasons, the problem reveals itself to be quite daunting.

The second hurdle facing a modeler who wants to include real environmental effects in a simulation is managing many discrete data sources. Simulation runs often face tight deadlines and lack the manpower needed to retrieve data from across the network, reformat it for ingest, regrid or resample it to fit the simulation parameters, and then incorporate it in model runs. Even if this could be accomplished, what confidence can the modeler have in the different data sources and their applicability to the current simulation without becoming an expert in each data type? The unfortunate side effect is that the environment is often forgotten in simulations, or a single environmental database is created and "canned" to be replayed again and again.

The ESG solves this problem by providing a 100% Java, platform-independent client with access to data mining and database creation capabilities on a network-distributed parallel computer cluster, with the ability to perform fuzzy-logic searching over a global array of environmental parameters. By providing intelligent, immediate access to real data, it ensures that the modeler can include realistic, reliable, and detailed environments in simulation applications.
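As a toy illustration of the fuzzy searching described, and not of the actual ESG interface, a membership function for "extreme rain" can be used to rank days in a record; the thresholds and data below are invented.

    # Grade each day's rainfall by fuzzy membership in "extreme rain",
    # then report the best-matching days, rather than applying a hard cutoff.
    def mu_extreme(rain_mm, onset=20.0, saturation=80.0):
        """Membership rises linearly from 0 at `onset` to 1 at `saturation`."""
        if rain_mm <= onset:
            return 0.0
        if rain_mm >= saturation:
            return 1.0
        return (rain_mm - onset) / (saturation - onset)

    daily_rain = [2.0, 0.0, 55.0, 96.0, 12.0, 71.0]  # toy record, mm/day
    ranked = sorted(enumerate(daily_rain), key=lambda d: mu_extreme(d[1]), reverse=True)
    print(ranked[:3])  # the three days that best match the fuzzy query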

This demonstration will present the results of data-mining, visualization, and a domain integration tool developed in a network distributed fashion and applied to environmental modeling.

 

3. Satellite Imagery As a Multi-Disciplinary Tool for Environmental Applications
Herbert W. Kroehl, World Data Center for Solar-Terrestrial Physics, National Geophysical Data Center, USA
Eric A. Kihn, NOAA/NGDC, USA
Alexei Gvishiani, RAS/CGDS, Russia
Mikhail N. Zhizhin, RAS/CGDS, Russia

Satellite technologies offer a unique opportunity to monitor the earth and its environment. Environmental satellite data, which initially focussed on “in situ” measurements of the ambient environment, are taking advantage of remote sensing technology through the use of imagers and sounders. Visible, infrared, microwave and ultraviolet emissions are now recorded across a swath as large as 3,000 km by instruments on operational meteorological and earth observing satellites. The resulting radiances are used to compute a disparate set of parameters serving very different scientific disciplines, e.g. space physics and sociology.

What environmental parameters are routinely computed from imagery and soundings recorded on satellites? The imagery from operational weather satellites is used to monitor clouds, snow, ice, and solar activity and to construct profiles of atmospheric temperature, humidity, and ozone. The same images have proved useful in assessing the state of the environment, detecting wildfires, tracking the flow of ash from volcanoes, and assessing population dynamics. Beyond the operational instruments, imagers on earth observing systems are used to assess environmental health, classify vegetation, assess the effects of natural hazards, and build digital elevation models.

But when the same data are used for many different applications, one scientist's signal becomes another scientist's noise, and it becomes important to classify the different environmental signals contained in an image. Data mining likewise requires automatic classification of images, especially when the image archives are so voluminous.

A sample of the diverse use of images recorded on weather and earth observing satellites will be presented as a prelude to the need for mathematical techniques to classify information contained in the images.



4. Development of the Space Physics Interactive Data Resource- II (SPIDR II) Experiences Working in a Virtual Laboratory Environment
Eric A. Kihn, NOAA/NGDC, USA
Dr. Mikhail Zhizhin, RAS/CGDS, Russia
Prof. Alexei Gvishiani, RAS/CGDS, Russia
Dr. Herbert W. Kroehl, NOAA/NGDC, USA

SPIDR 2 is a distributed resource for accessing space physics data, designed and constructed jointly at NGDC and CGDS to support the requirements of the Global Observation and Information Network (GOIN) project. SPIDR is designed to allow users to search, browse, retrieve, and display Solar Terrestrial Physics (STP) and DMSP satellite digital data. SPIDR consists of a WWW interface, online data and information, interactive display programs, and advanced data mining and retrieval programs.

The SPIDR system currently handles the following: DMSP visible, infrared, and microwave browse imagery; ionospheric parameters; 1-minute and hourly geomagnetic values; geophysical and solar indices; GOES x-ray, plasma, and magnetometer data; and cosmic ray, solar radio telescope, satellite anomaly, and city lights data sets. The goal is to manage and distribute all STP digital holdings through the SPIDR system, providing comprehensive and authoritative online data services, analysis, and numerical modeling to the space physics community.

The successful cooperation between NGDC and CGDS has produced a SPIDR-I mirror in 1997; the development and launch of SPIDR-II servers in Boulder, Moscow, and Sydney in 1999; additional SPIDR-II mirrors in South Africa and Japan in 2000; and a new satellite data systems prototype in 2001.

This presentation will describe the technologies and methodologies that produced exceptional results from a geographically distributed team working in a virtual laboratory environment.

 

5. An Automatic Analysis of Long Geoelectromagnetic Time Series: Determination of the Volcanic Activity Precursors
J. Zlotnicki, Observatoire de Physique du Globe de Clermont-Ferrand, France
J.-L. LeMouel, Director of the Department of Geomagnetism, Institut de Physique du Globe de Paris, France
S. Agayan, Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia
Sh. Bogoutdinov, Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia
A. Gvishiani, Director of the Center of Geophysical Data Studies and Telematics Applications, IPE RAS, Russia
V. Mikhailov, Institute for the Physics of the Earth RAS, Russia
S. Tikhotsky, Institute for the Physics of the Earth RAS, Russia

New methods developed for the analysis of long geophysical time series, based on a fuzzy-logic approach, are presented. These methods include algorithms for the detection of anomalous signals. They are specially designed for, and very efficient in, problems where the definition of the anomalous signal is fuzzy, i.e., where the general signature, amplitude, and frequency of the signal cannot be prescribed a priori, as when seeking precursors of natural disasters in geophysical records. The algorithms are able to determine the intervals of a record that are anomalous with respect to the background signal present in the record. Other algorithms deal with the morphological analysis of signals. These algorithms were applied to the analysis of electromagnetic records from La Fournaise volcano (Réunion Island), where for several years five stations measured the electric field along different directions. The signals specific to eruption events are identified and correlated across several stations. Other types of signals, corresponding to storms and other sources, are also identified and classified. Software has been designed to help analyze the spatial distribution of activity across the stations.
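A minimal sketch of background-relative anomaly grading in this spirit follows; it is an illustration only, not the authors' algorithms, and the window, scale, and data are invented.

    # Compare each sample to a local running median and grade the deviation
    # with a fuzzy membership in [0, 1] instead of a hard threshold.
    from statistics import median

    def anomaly_scores(series, window=5, scale=1.0):
        scores = []
        for i in range(len(series)):
            lo, hi = max(0, i - window), min(len(series), i + window + 1)
            background = median(series[lo:hi])
            deviation = abs(series[i] - background) / scale
            scores.append(min(1.0, deviation))  # fuzzy degree of "anomalous"
        return scores

    signal = [0.1, 0.0, 0.2, 3.1, 2.9, 0.1, 0.0]  # toy record with a burst
    print([round(s, 2) for s in anomaly_scores(signal)])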


6. Application of telematics approaches for solving the problems of distributed environmental monitoring
M. Zgurovsky, A. Novikov, National Technical University of Ukraine, Kiev Polytechnic Institute

The results of research carried out at the Cybernetics Gloushkov Center of the National Academy of Sciences of Ukraine are presented, together with a review of advanced developments in the field of distributed environmental monitoring.

Among the developments presented is an interactive system for modeling and forecasting ecological, economic, and other processes on the basis of observations, to support rapid control decisions. The system is based on the inductive group method of argument accounting, used for the automatic extraction of the essential information from measurement data. The efficiency of the system is demonstrated in applications to modeling and forecasting changes in the dynamics of animal plankton concentration, of the number of microorganisms in contaminated soil, and of other quantities.

The designs of a mobile laboratory for rapid radiation monitoring (RAMON) and of an automated system for the study of subsoil water processes (NADRA) are also presented, and problems of making the user interface of geophysical software more intelligent are considered.



Track IV-B-5:
Seismic Data Issues


Chair: A. Gvishiani, Director of the Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia

1. Clustering of Geophysical Data by New Fuzzy Logic Based Algorithms
S. Agayan, Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia
Sh. Bogoutdinov, Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia
A. Gvishiani, Director of the Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia
M. Diament, Institut de Physique du Globe de Paris (IPGP), France
V. Mikhailov, Institute for the Physics of the Earth RAS, Russia
C. Widiwijayanti, Institut de Physique du Globe de Paris (IPGP), France

A new system of clustering algorithms, based on a geometrical model of illumination in finite-dimensional space, has recently been developed using a fuzzy-sets approach. The two major components of the system are the RODIN and CRYSTAL algorithms. These two efficient clustering tools will be presented along with their applications to seismological, gravity, and geomagnetic data analysis. The regions of the Molucca Sea (Indonesia) and the Gulf of Saint-Malo (France) are under consideration. In studying the very complicated geodynamics of the Molucca Sea region, earthquake hypocenters were clustered with respect to their position, type of faulting, and horizontal displacement strike. The results clarified the stress pattern and hence the geodynamic structure of the region. The RODIN algorithm was also applied to cluster the results of an anomalous-gravity-field pseudo-inversion over this region; it improved the solution considerably and helped determine the depths and horizontal positions of the sources of the gravity anomalies. The results obtained correlate well with local seismic tomography and gravity inversion. In the Gulf of Saint-Malo region, the algorithms were successfully used to investigate the structure of quasi-linear magnetic anomalies onshore and offshore.
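RODIN and CRYSTAL themselves are not specified in the abstract. As a stand-in illustration of the underlying task, the sketch below clusters toy hypocenter records with the generic DBSCAN algorithm from scikit-learn, plainly a different technique; the catalog values are invented, with depth rescaled so the three coordinates are comparable.

    # Cluster earthquake hypocenters on (lon, lat, scaled depth) with DBSCAN.
    import numpy as np
    from sklearn.cluster import DBSCAN

    hypocenters = np.array([
        # lon, lat, depth/100 km (rescaled so all three axes are comparable)
        [126.10, 1.20, 0.35], [126.20, 1.30, 0.40], [126.15, 1.25, 0.38],
        [127.90, 2.80, 1.20], [128.00, 2.90, 1.25],
    ])
    labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(hypocenters)
    print(labels)  # one cluster index per event; -1 would mark noise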

 

2. Artificial Intelligence Methods in the Analysis of Large Geophysical Data Bases
A. Gvishiani, Director of the Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia
J.Bonnin, Institut de Physique du Globe de Strasbourg, France

The presentation is devoted to different kinds of artificial intelligence algorithms oriented towards geophysical applications: syntactic pattern recognition, geometrical cluster analysis, time-series processing and classification, dynamic pattern recognition with learning, and others. A large part of the presentation is devoted to applications of fuzzy logic and fuzzy mathematics in the development of artificial intelligence algorithms. The following geophysical and environmental applications will be presented: recognition of areas prone to strong earthquakes in the Alps-Pyrenees and the Caucasus, syntactic classification of seismograms and strong ground motion records, identification of anomalies in geoelectrical and gravity data, and the use of clustering for the interpretation of geomagnetic data.

 

3. Geo-Environmental Assessment of Flash Flood Hazard of the Safaga Terrain, Egypt, Using Remote Sensing Imagery
Maged L. El Rakaiby, Nuclear Materials Authority, Egypt
Mohamed N. Hegazy, National Authority for Remote Sensing and Space Sciences, Egypt
Menas Kafatos, Center for Earth Observing and Space Research, GMU, USA

We emphasize the use of space images for detecting, interpreting, and mapping elements of the geological and geomorphological environment of the Safaga terrain, Egypt, in order to monitor the geomorphological elements causing flash floods. Safaga town and the associated highways are severely affected by flash floods more than once a year. Information interpreted from space images is very useful for reducing flash flood hazard and adjusting the use of the Safaga terrain.


4. On the Modeling of Fast Variations of the Mode of Deformation of Lithospheric Plates
M. Diament, Institut de Physique du Globe de Paris (IPGP), France
J.-O. Dubois, Institut de Physique du Globe de Paris (IPGP), France
E. Kedrov, Center of Geophysical Data Studies and Telematics Applications IPE RAS, Russia
M. Kovalenko, The State Research Institute of Aviation Systems, Russia
V. Mikhailov, Institute for the Physics of the Earth RAS, Russia
Yu. Murakami, Geological Survey of Japan, Japan

This paper discusses possible applications of new, recently obtained exact solutions of elasticity-theory problems for domains having corner points. Analysis of the solutions shows that the mode of deformation in narrow zones along the boundary of such bodies, close to the corner points, strongly depends on the work of the surface forces released at these points.

Exact solutions for a rectangle differ fundamentally from the classical exact solutions for unbounded domains (e.g., a wedge or an infinite strip) or for domains bounded by a smooth boundary. The explanation lies in the fact that the properties of corner points differ considerably from the properties of the domain they belong to. In particular, such a fundamental notion as a surface area element cannot be introduced at a corner point, so the effect of this point can be calculated only as additional work released at the corner point by some fictitious forces and/or torque, additional to the acting surface forces.

When some interval of a body's boundary has high curvature or contains a corner point, and the boundary loading does not vanish there, small variations of the shape of the boundary in the vicinity of that interval or corner point can cause finite or even infinite variations of the specific energy. This actually means that the Saint-Venant principle is not valid for areas containing corner points. When the boundary of an area is strongly irregular, the solution depends on how the boundary loading accommodates to the intervals of high boundary curvature.
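A standard way to see this sensitivity, offered here as an illustration of the classical corner asymptotics rather than a result quoted from the paper: near a corner point the stress field admits the expansion

    \sigma_{ij}(r, \theta) \sim K \, r^{\lambda - 1} f_{ij}(\theta), \qquad 0 < \lambda < 1,

so the stresses, and with them the specific energy density proportional to r^{2(\lambda - 1)}, grow without bound as r \to 0, and small changes of the boundary near the corner can change the energy by finite or even unbounded amounts.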

The results obtained make it possible to consider corner points of lithospheric plates as singular or "trigger" points, probably responsible for the fast observable changes of the mode of deformation along plate boundaries. These fast changes at plate boundaries could arise not only from variations of boundary forces in the vicinity of corner points but also from changes of inner structure and/or rheology inside the plates. The latter changes could arise either from decompaction of rocks in the vicinity of corner points (as a consequence of earthquakes or of tectonic or thermal processes) or, vice versa, from rock compaction taking place during periods of seismic quiescence.

This investigation was performed by the staff of a virtual laboratory on new solutions of elasticity theory, designed and maintained by scientists from Russia, Japan, France, and the USA within the framework of a joint project supported by the International Science and Technology Center. The web site designed for the project supports teleconferences and the exchange and presentation of results.

 

5. New Mathematical Approach to Seismotectonic Data Studies
M. Kovalenko and N. Tsybin, State Research Institute of Aviation Systems, Russia
Yu. Rebetsky, Institute of Physics of the Earth RAS, Russia
Yu. Murakami, Geological Survey of Japan, Japan

The paper discusses possible applications of new, recently obtained exact solutions of some classical problems of elasticity theory for domains containing ruptures. Analysis of the solutions shows that the solution for domains with ruptures is non-unique. The explanation lies in the fact that the properties of crack apexes differ considerably from the properties of the domain they belong to. The stress distribution strongly depends on the work of the surface forces released at these points; in practice it is a question of the work released at the micro level. The effect of crack apexes can therefore be calculated only as additional work released there.

The results obtained make it possible to consider the apexes of lithospheric-plate faults as trigger points, probably responsible for fast observable changes of the mode of deformation. These fast changes could arise not only from variations of boundary forces in the vicinity of fault apexes but also from changes of inner structure and/or rheology inside the plates. In effect, the crack energy may change without any increase or decrease in crack length.

This study was performed using the virtual laboratory approach, designed and maintained by scientists from Russia, Japan, France, and the USA within the framework of a joint project supported by the International Science and Technology Center. The web site designed for the project supports teleconferences and the exchange and presentation of results.

 
