The Worldwide LHC Computing Grid in India: the work of the CMS and ALICE Tier-2 centres
High Energy Physics, through the Large Hadron Collider (LHC) programme, represents one of the unique science and research facilities shared between India and Europe, in the field of scientific research in general and in the ICT domain in particular through the Worldwide LHC Computing Grid (WLCG) project. WLCG is the largest grid infrastructure worldwide, created to address the data requirements of the LHC (about 15 million gigabytes per year). India has established a regional WLCG network with two Tier-2 centres: one at the Tata Institute of Fundamental Research (TIFR) in Mumbai for the Compact Muon Solenoid (CMS) experiment and another at VECC/SINP Kolkata for ALICE (A Large Ion Collider Experiment), plus a number of Tier-3 centres at various Indian universities and at institutions aided by the Indian Department of Atomic Energy (DAE). The migration of WLCG connectivity to NKN in India and the establishment of the 2.5 Gbps TEIN3 link interconnected with NKN have given a substantial boost to the activity of the Indian LHC research community, allowing these researchers full access to LHC data and widening their scope to contribute to the ambitious physics goals of the LHC programme.
The LHC is the largest High Energy Physics project in the world. It makes possible high-luminosity collisions between protons at 7 + 7 TeV and between Pb nuclei at 2.76 TeV per nucleon in the existing 27 km LEP tunnel managed by CERN near Geneva, Switzerland. EU-IndiaGrid2 HEP application support will leverage the strong relationship established among the INFN, TIFR, BARC, VECC and SAHA institutes and the positive experience acquired during the previous EU-IndiaGrid project.
TIFR in Mumbai and VECC/SAHA in Kolkata manage, respectively, the CMS and ALICE Tier-2 centres in India within the framework of the WLCG infrastructure. A Tier-3 centre for CMS is also managed by BARC in Mumbai. These centres are connected by high-speed networks to CERN, ASGC, FNAL and CNAF, and have been processing LHC data since the start of collisions in autumn 2009.
At the CMS Tier-2 centre at TIFR, almost 10 billion events have been processed since the beginning of 2011, and thousands of terabytes of CMS data have been retrieved and/or transferred at the centre within the last three months. The CMS Tier-2 has been chosen to be part of LHCONE, the LHC Open Network Environment, whose aim is to ensure better access to the most important datasets for the worldwide High Energy Physics community through a collection of access locations that act as entry points into the network, and hence to improve data analysis.
The HEP community in India (government laboratories and universities) worked hard to set up the CMS and ALICE detectors at the LHC at CERN, and has collected good-quality data sets. The experimental runs of both CMS and ALICE have yielded excellent publications in reputed international journals: the CMS collaboration has reached a count of 100 papers, all in peer-reviewed journals, of which 75 are based on LHC collision data, 24 on cosmic-ray runs, and one is the CMS detector paper.
Large volumes of data (many terabytes) generated by the LHC experiments can be transported and distributed easily to all Indian participants through the low-latency, high-bandwidth NKN and TEIN3 connectivity. As part of WLCG, the CMS and ALICE Tier-2 centres contribute processing power (over 1,000 cores) and storage capacity (over 800 terabytes), and the worldwide HEP community uses these resources very effectively. The experience gained in operating and using this e-infrastructure has been immensely beneficial, allowing faster adoption of Grid technology and the implementation of many new applications in India, such as open-source drug discovery, climate-change modelling, e-classrooms, the Cancer Grid and the Health Grid.
The DAE Grid
The communication needs within the Indian Department of Atomic Energy (DAE) are diverse. First, information must be shared among its constituent units in a secure manner. Information exchange with other educational and research institutes outside the DAE is also required for collaborative work. Furthermore, access to the Internet, both for carrying out international collaborative work and for the wealth of information available on it, is essential for the DAE. All of these needs have been met over a single, nation-wide infrastructure, NKN, using Virtual Routing and Forwarding (VRF) technology.
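The principle behind VRF can be illustrated with a small sketch (illustrative only; the VRF names, prefixes and next hops below are hypothetical, not DAE's actual configuration): a single device keeps several independent routing tables, so traffic belonging to different communities is forwarded according to its own table even though it crosses shared hardware.

```python
import ipaddress

class VrfRouter:
    """Toy model of Virtual Routing and Forwarding: one router,
    one independent routing table per VRF."""

    def __init__(self):
        self.tables = {}  # VRF name -> list of (prefix, next_hop)

    def add_route(self, vrf, prefix, next_hop):
        self.tables.setdefault(vrf, []).append(
            (ipaddress.ip_network(prefix), next_hop))

    def lookup(self, vrf, dest):
        # Longest-prefix match, restricted to the given VRF's table only.
        addr = ipaddress.ip_address(dest)
        matches = [(net, hop) for net, hop in self.tables.get(vrf, [])
                   if addr in net]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0].prefixlen)[1]

router = VrfRouter()
# Intra-DAE traffic stays on secure internal links.
router.add_route("dae-internal", "10.0.0.0/8", "secure-nkn-link")
# The same device carries a separate table for Internet-bound traffic.
router.add_route("internet", "0.0.0.0/0", "internet-gateway")

print(router.lookup("dae-internal", "10.1.2.3"))  # secure-nkn-link
print(router.lookup("internet", "8.8.8.8"))       # internet-gateway
```

Because each lookup consults only its own VRF's table, secure intra-DAE traffic and general Internet traffic remain isolated from one another while sharing the same physical NKN infrastructure.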
Envisioning effective sharing of computational resources (both hardware and software) among the units of DAE, an intra-DAE grid has been set up over the NKN infrastructure, interconnecting the computational resources at IGCAR Kalpakkam, RRCAT Indore, VECC Kolkata and BARC Mumbai, sites that are roughly a thousand kilometres apart. This grid is operational and is used extensively by researchers at these institutes. Currently about 800 processor cores, spread across 7 clusters at these institutes, are available for use. Prior to NKN, these resources were interconnected through low-speed leased lines, and the services were therefore limited to providing access to the high-end compute clusters spread across the various units of the DAE. Migration of the DAE Grid onto the multi-gigabit, low-latency NKN has opened new vistas for high-end application software development. Work is currently in progress on bandwidth-intensive applications such as collaborative design, DAE-wide online classrooms, disaster-recovery mechanisms for critical data and grid services, and real-time sharing of laboratory equipment.
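A back-of-the-envelope calculation shows why the move from leased lines to a multi-gigabit network changes what is feasible. The link speeds below are illustrative (the text does not state the actual leased-line capacities), but the contrast is representative:

```python
# Ideal transfer time for a 1 TB dataset at various link speeds.
# Link speeds are illustrative, not the actual DAE circuit capacities.
def transfer_hours(size_tb, link_mbps):
    bits = size_tb * 1e12 * 8           # dataset size in bits
    seconds = bits / (link_mbps * 1e6)  # time at full link rate
    return seconds / 3600

for mbps in (2, 34, 1000, 2500):        # leased lines vs multi-gigabit NKN
    print(f"{mbps:>5} Mbps: {transfer_hours(1, mbps):8.1f} hours")
```

At a few Mbps, moving a terabyte takes weeks, so only remote login to distant clusters is practical; at gigabit rates it takes a couple of hours, which is what makes collaborative design, online classrooms and disaster-recovery replication viable.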
Collaborative Design of Reactor Components
The gamut of expertise needed for the development of fast breeder reactor technology is distributed across many institutes of DAE, so different aspects of reactor technology development are addressed by separate groups of experts. Frequent interaction between these groups is required throughout the design process, involving access to databases, compute resources and high-end visualization tools as well as multi-party video interactions, which places heavy demands on network bandwidth. Furthermore, during manufacturing, constant video interaction is needed between the manufacturer and the expert groups for continued handholding and guidance. DAE has utilized, and continues to utilize, the high bandwidth provided by NKN to meet the requirements of such high-end applications.