
Atlas Computing Resources at Cyfronet and PSNC

Machines Storage PLGrid Job management Atlas software

News: new resources are available from the atlaslhc2014 grant in PLGrid (see the "Resources from PLGrid grants" section below)

Grid clusters
  • Cyfronet
    • CE CREAM SL6: cream.grid.cyf-kr.edu.pl, cream02.grid.cyf-kr.edu.pl
    • Scientific Linux 6, x86_64, EMI3+32-bit compatibility libraries+python32
    • User interface machines: ui.grid.cyf-kr.edu.pl
  • PSNC

    • CE CREAM SL6: creamce.reef.man.poznan.pl
    • Scientific Linux 6, x86_64, EMI3+32-bit compatibility libraries+python32
    • User interface machines: ui.reef.man.poznan.pl
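
A minimal login sketch, assuming standard ssh access to the UI machines with your PLGrid account (replace <plgrid_login> with your own login):

  # log in to the Cyfronet UI
  ssh <plgrid_login>@ui.grid.cyf-kr.edu.pl
  # or to the PSNC UI
  ssh <plgrid_login>@ui.reef.man.poznan.pl
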
Storage
  • Storage at Cyfronet:
    • /people/<user> - mounted on UI and all EGI WNs, access for regular users, backed up every day
      • quota: 5 GB soft (may be exceeded for up to 7 days), 7 GB hard
    • /storage/<user> - mounted on UI and all EGI WNs, should be used to store data, not backed up
      • quota: 80 GB soft, 100 GB hard
    • /mnt/lustre/scratch/<user> - mounted on UI and all EGI WNs, should be used for computing
      • files cleaned after 14 days
    • /home/grid - home directories for grid users, mounted only on WNs
    • experiment software
      • /cvmfs/atlas.cern.ch/repo/sw/
    • DPM disk arrays, total currently ~340 TB connected
      • Access through SRM interface at dpm.cyf-kr.edu.pl
      • srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas
      • useful space tokens: 20 TB on ATLASSCRATCHDISK, 10 TB on ATLASLOCALDISK (see the copy example after the storage list)
    • Internal IDE disks on WNs: local /home and /tmp directories are available, but only for temporary use during job execution
  • Storage at PSNC:
    • /home/users/<user> - mounted on UI and all EGI WNs, access for local users
    • experiment software
      • /cvmfs/atlas.cern.ch/repo/sw/
    • DPM disk arrays, total currently ~340 TB connected
      • Access through SRM interface at se.reef.man.poznan.pl
      • srm://se.reef.man.poznan.pl:8446/dpm/reef.man.poznan.pl/home/atlas/
      • useful space tokens: 20 TB on ATLASSCRATCHDISK
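
A copy sketch using the generic gfal2 clients, assuming they are available on the UI; the local file name and the atlasscratchdisk subdirectory are placeholders, adjust the target path to your space token area:

  # copy a local file to the Cyfronet DPM via SRM (illustrative target path)
  gfal-copy file:///storage/<user>/myfile.root \
      srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas/atlasscratchdisk/<user>/myfile.root
  # list a directory on the PSNC storage element
  gfal-ls srm://se.reef.man.poznan.pl:8446/dpm/reef.man.poznan.pl/home/atlas/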

Atlas Data Access Tools
  • DQ2 client commands, described at: https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Client
  • Installed in Atlas software area, at: /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/DQ2Client/XXX/DQ2Clients/opt/dq2/
  • Setup: use ATLASLocalRootBase method described in PLGrid section
  • Ignore warnings about a site name mismatch, or use the "-L ROAMING" option
  • Direct read access with the xrootd and https protocols (see the usage sketch after this list)
  • Using xrdcp and curl commands for copying files
  • Using TFile::Open in ROOT programs
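
A short usage sketch; the run, dataset and file names are hypothetical, and the exact xrootd redirector for the DPM may differ at your site:

  # set up the DQ2 client via ATLASLocalRootBase and obtain a VOMS proxy
  localSetupDQ2Client
  voms-proxy-init -voms atlas
  # list and fetch a dataset, ignoring the local site association
  dq2-ls data12_8TeV.<run>.physics_Muons.*
  dq2-get -L ROAMING data12_8TeV.<run>.physics_Muons.<dataset>/
  # direct read of a single file over xrootd
  xrdcp root://dpm.cyf-kr.edu.pl//dpm/cyf-kr.edu.pl/home/atlas/<path>/<file>.root .

In a ROOT session the same file can be opened directly with TFile::Open("root://dpm.cyf-kr.edu.pl//dpm/cyf-kr.edu.pl/home/atlas/<path>/<file>.root").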

Frontier Database access
  • Athena releases >= 15.4.0 can access database information (geometry, conditions, etc.) using Frontier/Squid web caching.
  • A Frontier server, with integrated Squid proxy, is installed at GridKa, at: http://atlassq1-fzk.gridka.de:8021/fzk
  • A Tier2 Squid (test instance) is also installed at CYFRONET-LCG2; using it may help when network latencies are high.
    http://atlas.grid.cyf-kr.edu.pl:3128
  • Default setup is contained in $VO_ATLAS_SW_DIR/local/setup.sh
    Source it to get access to DB info for real data
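
A minimal sketch of using the default Frontier setup before running an Athena job; the job options file name is a placeholder:

  # pick up the site's Frontier/Squid configuration, then run Athena as usual
  source $VO_ATLAS_SW_DIR/local/setup.sh
  athena <your_job_options>.py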

Resources from PLGrid grants

New resources and services are becoming available from PLGrid and HEPGrid

How to get access:

  • Register as a user from the Polish scientific community in PLGrid (see the help in the HEPGrid link)
  • On the PLGrid account page select the folder "Zespoly i Granty" (Teams and Grants), search for the atlaslhc team and request to be included
  • Contact A. Olszewski to discuss the time scale and the amount of resources needed
  • Request a personal grid certificate as described at http://nz11-agh1.ifj.edu.pl/do/view/AtlasTier2/Tier2PLGridCertificates

Initialize services from https://portal.plgrid.pl/web/guest/useraccount:

  1. Request access to a computing cluster (e.g. REEF, ZEUS)
  2. Request access to the UI on the computing cluster (e.g. at Cyfronet with ZEUS, at PCSS with REEF)
  3. Subscribe to all services from "Platforma dziedzinowa HEPGrid" (HEPGrid domain platform), e.g. CVMFS
Description:
  • Team id: plggatlaslhc, Grant id: atlaslhc2014n1
    at Cyfronet up to: walltime: 30,000 h, total storage space: 10,000 GB

  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2014n1
  • Atlas analysis jobs: you can submit jobs using pathena or prun
    options: Cyfronet site: --site ANALY_CYF; direct access to files (including xrootd): --pfnList
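
A hedged submission sketch with prun; the executable, input dataset and output dataset names are placeholders:

  # submit a simple analysis job to the Cyfronet analysis site
  prun --exec "python my_analysis.py %IN" \
       --inDS data12_8TeV.<run>.physics_Muons.<dataset>/ \
       --outDS user.<nickname>.cyf.test.1 \
       --site ANALY_CYF
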
Job Management
  • Batch system: Portable Batch System (PBS) (unsupported OpenPBS variant)
  • Queues on EGI: list with qstat -q
    • regular local queues: l_short, l_long, l_infinite, l_prio
    • queues for Grid VO users: atlas, biomed, alice, ...
  • Job submission and management is done from the user interface machine ui.cyf-kr.edu.pl:
    qsub -q queue_name job_script (e.g. qsub -q l_infinite job_script)
  • Interactive work, starting a new session on a WN:
    qsub -I -q l_infinite
    If this works too slowly, one can try "-q l_prio". If this does not help, one can connect by ssh, but first check which nodes are free:
    pbsnodes -a | grep -B 1 "state = free"
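
A minimal PBS job script sketch for local submission; the queue, walltime, job name and payload are illustrative, and the -A grant account follows the PLGrid section above:

  #!/bin/bash
  #PBS -q plgrid-long
  #PBS -A atlaslhc2014n1
  #PBS -l walltime=12:00:00
  #PBS -N atlas_test
  # run from the directory the job was submitted from
  cd $PBS_O_WORKDIR
  ./my_analysis.sh

Submit it with: qsub my_job.pbs
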
Atlas Software
  • Grid production installation: cluster creamce.grid.cyf-kr.edu.pl
  • All releases available in cvmfs catalogs
  • For athena setup see: https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/WorkBook
  • For access to DB conditions for real data: source $VO_ATLAS_SW_DIR/local/setup.sh
  • Setup: use ATLASLocalRootBase (also described in PLGrid help pages for cvmfs service)
  • export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
    source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
    asetup 17.2.13.9,slc5,32 (or localSetupDQ2Client, etc.)
-- AndrzejOlszewski - 18 November 2013