Difference: Tier2Resources (1 vs. 21)

Revision 21 - 2014-11-23 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Line: 78 to 78
 
  • Team id: plggatlaslhc, Grant id: atlaslhc2014n1
    at Cyfronet up to: walltime [h]: 30000, total-storage-space [GB]: 10,000

  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2014n1
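    A minimal local job sketch (the script name, job name, walltime and payload are placeholders, not site defaults; queue and grant account are the ones quoted above):

      # myjob.sh - illustrative payload only
      #PBS -N atlas-local-test
      #PBS -l walltime=01:00:00
      source $VO_ATLAS_SW_DIR/local/setup.sh       # DB conditions setup, as described in the Atlas Software section
      cd /mnt/lustre/scratch/groups/plggatlaslhc   # group disk listed above
      echo "running on $(hostname)"

      # submit from the UI to the plgrid-long queue under the current grant
      qsub -q plgrid-long -A atlaslhc2014n1 myjob.sh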
Changed:
<
<
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
>
>
  • Atlas analysis jobs: you can send jobs using pathena, prun
    use options: Cyfronet: --site ANALY_CYF, direct access of files (including xrootd): --pfnList
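    For illustration, a hedged prun sketch using these options (dataset, output and file-list names are placeholders only):

      # grid-resident input dataset
      prun --exec "echo %IN" --inDS data12_8TeV.SomeSample.DAOD/ \
           --outDS user.somebody.test.cyf.v1 --site ANALY_CYF

      # direct access to files listed (one PFN per line, e.g. xrootd URLs) in a local text file
      prun --exec "root -b -q macro.C" --pfnList my_pfns.txt \
           --outDS user.somebody.test.pfnlist.v1 --site ANALY_CYF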
 
Job Management
  • Portable Batch System (PBS) (unsupported OpenPBS)

Revision 20 - 2014-10-21 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Line: 79 to 79
 
  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2014n1
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
Deleted:
<
<
  • One can start local personal pilot jobs to speed up processing:
    • initialize your voms proxy
    • Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
 
Job Management
  • Portable Batch System (PBS) (unsupported OpenPBS)

Revision 19 - 2014-10-21 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Machines Storage PLGrid Job management Atlas software

Changed:
<
<
News: new resources available from grant atlaslhc2012 in PLGrid #PLGridAnchor
>
>
News: new resources available from grant atlaslhc2014 in PLGrid #PLGridAnchor
 
Grid clusters
Line: 75 to 75
 
  1. Request access to UI on computing cluster (e.g. at Cyfronet with ZEUS, at PCSS with REEF)
  2. Subscribe to all services from "Platforma dziedzinowa HEPGrid": e.g. CVMFS
Description:
Changed:
<
<
  • Team id: plggatlaslhc, Grant id: atlaslhc2014n
    total-walltime [h]: 50000 , total-storage-space [GB]: 50000
    Cyfronet up to: walltime [h]: 35000, total-storage-space [GB]: 50,000
    PSNC up to: walltime [h]: 15000, storage-space [GB]: 1,000

  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
    PSNC: /home/plgrid-groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2013
    PSNC: qsub -q plgrid-long -A atlaslhc2013
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
    PSNC: --site ANALY_PSNC
>
>
  • Team id: plggatlaslhc, Grant id: atlaslhc2014n1
    at Cyfronet up to: walltime [h]: 30000, total-storage-space [GB]: 10,000

  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2014n1
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
 
  • One can start local personal pilot jobs to speed up processing:
    • initialize your voms proxy
    • Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
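      A hedged invocation sketch (the pilot count is arbitrary; queue and grant names are those used elsewhere on this page):

        voms-proxy-init -voms atlas   # initialize the VOMS proxy first
        /mnt/lustre/scratch/groups/plggatlaslhc/Panda/submit_pilots.sh 10 plgrid-long atlaslhc2014n1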
Deleted:
<
<
    • PSNC: /home/plgrid-groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
 
Job Management
  • Portable Batch System (PBS) (unsupported OpenPBS)

Revision 18 - 2014-10-16 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Line: 75 to 75
 
  1. Request access to UI on computing cluster (e.g. at Cyfronet with ZEUS, at PCSS with REEF)
  2. Subscribe to all services from "Platforma dziedzinowa HEPGrid": e.g. CVMFS
Description:
Changed:
<
<
  • Team id: plggatlaslhc, Grant id: atlaslhc2013
    total-walltime [h]: 50000 , total-storage-space [GB]: 50000
    Cyfronet up to: walltime [h]: 35000, total-storage-space [GB]: 50,000
    PSNC up to: walltime [h]: 15000, storage-space [GB]: 1,000

>
>
  • Team id: plggatlaslhc, Grant id: atlaslhc2014n
    total-walltime [h]: 50000 , total-storage-space [GB]: 50000
    Cyfronet up to: walltime [h]: 35000, total-storage-space [GB]: 50,000
    PSNC up to: walltime [h]: 15000, storage-space [GB]: 1,000

 
  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
    PSNC: /home/plgrid-groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2013
    PSNC: qsub -q plgrid-long -A atlaslhc2013
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
    PSNC: --site ANALY_PSNC
Line: 99 to 99
 
Changed:
<
<
  • Setup: use ATLASLocalRootBase (also described in PLGrid help pages for cvmfs service)
    asetup 17.2.6.2 4 (or any other available)
  • source $VO_ATLAS_SW_DIR/local/setup.sh
>
>
  • Setup: use ATLASLocalRootBase (also described in PLGrid help pages for cvmfs service)
  • export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
    source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
    asetup 17.2.13.9,slc5,32 (or localSetupDQ2Client, etc.)
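    After this setup, an illustrative DQ2 session could look as follows (the dataset name is a placeholder; "-L ROAMING" is the option quoted in the Data Access Tools section):

      localSetupDQ2Client
      voms-proxy-init -voms atlas
      dq2-ls -f user.somebody.mydataset/            # list files in a dataset (placeholder name)
      dq2-get -L ROAMING user.somebody.mydataset/   # download, ignoring the site-name mismatch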
 -- AndrzejOlszewski - 18 November 2013 \ No newline at end of file

Revision 17 - 2013-12-01 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Line: 49 to 49
 
  • Installed in Atlas software area, at: /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/DQ2Client/XXX/DQ2Clients/opt/dq2/
  • Setup: use ATLASLocalRootBase method described in PLGrid section
  • ignore warnings about site name mismatch or use "-L ROAMING" option
Added:
>
>
  • Direct read access with xrootd and https protocols, see details
  • Using xrdcp and curl commands for copying files
  • Using TFile::Open in ROOT programs
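    For illustration only; the xrootd door and file path below are assumptions, not verified endpoints (the SRM host is the one from the Storage section):

      # copy a single file out of the DPM over xrootd
      xrdcp root://dpm.cyf-kr.edu.pl//dpm/cyf-kr.edu.pl/home/atlas/atlasscratchdisk/some/file.root /tmp/

      # or open it directly in ROOT without a local copy
      root -l -e 'TFile::Open("root://dpm.cyf-kr.edu.pl//dpm/cyf-kr.edu.pl/home/atlas/atlasscratchdisk/some/file.root")'

      # https read with the grid proxy (illustrative flags; see the details link above)
      curl -L --cert $X509_USER_PROXY --cacert $X509_USER_PROXY --capath /etc/grid-security/certificates \
           -O https://dpm.cyf-kr.edu.pl/dpm/cyf-kr.edu.pl/home/atlas/atlasscratchdisk/some/file.root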
 
Frontier Database access
  • Athena in release >=15.4.0 can now access information from database (geometry, conditions, etc.) using Frontier/Squid web caching.

Revision 16 - 2013-11-19 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Changed:
<
<
Machines Storage PLGrid Job management Atlas software
Tier2 Monitoring
>
>
Machines Storage PLGrid Job management Atlas software
  News: new resources available from grant atlaslhc2012 in PLGrid #PLGridAnchor

Grid clusters
  • Cyfronet
Changed:
<
<
    • CE CREAM SL5: cream.grid.cyf-kr.edu.pl, cream02.grid.cyf-kr.edu.pl
    • Scientific Linux 5, x86_64, gLite 3.2+32-bit compatibility libraries+python32
>
>
    • CE CREAM SL6: cream.grid.cyf-kr.edu.pl, cream02.grid.cyf-kr.edu.pl
    • Scientific Linux 6, x86_64, EMI3+32-bit compatibility libraries+python32
 
    • User interface machines: ui.grid.cyf-kr.edu.pl
  • PSNC

Changed:
<
<
    • CE CREAM SL5: creamce.reef.man.poznan.pl
    • Scientific Linux 5, x86_64, gLite 3.2+32-bit compatibility libraries+python32
>
>
    • CE CREAM SL6: creamce.reef.man.poznan.pl
    • Scientific Linux 6, x86_64, EMI3+32-bit compatibility libraries+python32
 
    • User interface machines: ui.reef.man.poznan.pl
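  For illustration, a hedged direct CREAM submission; the JDL is a minimal placeholder and the batch queue path (cream-pbs-atlas5) is taken from an older revision below, so it may differ:

    # test.jdl - minimal illustrative JDL
    Executable    = "/bin/hostname";
    StdOutput     = "out.txt";
    StdError      = "err.txt";
    OutputSandbox = {"out.txt", "err.txt"};

    # submit with automatic proxy delegation to the Cyfronet CREAM CE
    glite-ce-job-submit -a -r cream.grid.cyf-kr.edu.pl:8443/cream-pbs-atlas5 test.jdl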
Storage
Line: 28 to 28
 
    • mounted only on WNs, access for grid users
      • /home/grid - home for grid users
    • experiment software
Changed:
<
<
      • /software/grid
>
>
      • /cvmfs/atlas.cern.ch/repo/sw/
 
    • DPM disk arrays, total currently ~340 TB connected
      • Access through SRM interface at dpm.cyf-kr.edu.pl
      • srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas
Line: 38 to 38
 
    • mounted on UI and all EGI WNs, access for local users
      • /home/users/< user >
    • experiment software
Changed:
<
<
      • /opt/exp_soft/atlas
>
>
      • /cvmfs/atlas.cern.ch/repo/sw/
 
    • DPM disk arrays, total currently ~340 TB connected
      • Access through SRM interface at dpm.cyf-kr.edu.pl
      • srm://se.reef.man.poznan.pl:8446/dpm/reef.man.poznan.pl/home/atlas/
Line: 46 to 46
 
Atlas Data Access Tools
Changed:
<
<
  • Installed in Atlas software area, at: /software/grid/atlas/ddm
  • Setup: source /software/grid/atlas/ddm/latest/setup.sh
>
>
  • Installed in Atlas software area, at: /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/DQ2Client/XXX/DQ2Clients/opt/dq2/
  • Setup: use ATLASLocalRootBase method described in PLGrid section
 
  • ignore warnings about site name mismatch or use "-L ROAMING" option

Frontier Database access
Line: 55 to 55
 
  • A Frontier server, with integrated Squid proxy, is installed at GridKa, at: http://atlassq1-fzk.gridka.de:8021/fzk
  • Tier2 Squid (test instance) is installed also at CYFRONET-LCG2, using it may help with longer network latencies.
    http://atlas.grid.cyf-kr.edu.pl:3128
  • Default setup is contained in $VO_ATLAS_SW_DIR/local/setup.sh
    Source it to get access to DB info for real data
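    Purely illustrative: the canonical values come from sourcing $VO_ATLAS_SW_DIR/local/setup.sh; a manually exported FRONTIER_SERVER combining the server and squid above could look roughly like this (assumed variable format):

      export FRONTIER_SERVER="(serverurl=http://atlassq1-fzk.gridka.de:8021/fzk)(proxyurl=http://atlas.grid.cyf-kr.edu.pl:3128)"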
Deleted:
<
<
 

Resources from PLGrid grants
Line: 66 to 65
 
  • Register as a user from Polish scientific community in PLGrid (see help in HEPGrid link)
  • In the PLGrid account page select folder "Zespoly i Granty", search atlaslhc team and request to be included
  • Contact A. Olszewski to discuss time scale and amount of resources needed
Added:
>
>
Request personal grid certificate as described at http://nz11-agh1.ifj.edu.pl/do/view/AtlasTier2/Tier2PLGridCertificates

Initialize services from https://portal.plgrid.pl/web/guest/useraccount:

  1. Request access to computing cluster (e.g. REEF, ZEUS)
  2. Request access to UI on computing cluster (e.g. at Cyfronet with ZEUS, at PCSS with REEF)
  3. Subscribe to all services from "Platforma dziedzinowa HEPGrid": e.g. CVMFS
 Description:
  • Team id: plggatlaslhc, Grant id: atlaslhc2013
    total-walltime [h]: 50000 , total-storage-space [GB]: 50000
    Cyfronet up to: walltime [h]: 35000, total-storage-space [GB]: 50,000
    PSNC up to: walltime [h]: 15000, storage-space [GB]: 1,000

  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
    PSNC: /home/plgrid-groups/plggatlaslhc
Line: 88 to 93
 
Atlas Software
  • Grid production installation: cluster creamce.grid.cyf-kr.edu.pl
Changed:
<
<
>
>
 
  • For access to DB conditions for real data: source $VO_ATLAS_SW_DIR/local/setup.sh
Changed:
<
<
>
>
  • Setup: use ATLASLocalRootBase (also described in PLGrid help pages for cvmfs service)
    asetup 17.2.6.2 4 (or any other available)
 
  • source $VO_ATLAS_SW_DIR/local/setup.sh
Deleted:
<
<
-- AndrzejOlszewski - 1 March 2013
 \ No newline at end of file
Added:
>
>
-- AndrzejOlszewski - 18 November 2013

Revision 15 - 2013-03-12 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Line: 67 to 67
 
  • In the PLGrid account page select folder "Zespoly i Granty", search atlaslhc team and request to be included
  • Contact A. Olszewski to discuss time scale and amount of resources needed
Description:
Changed:
<
<
  • Team id: plggatlaslhc, Grant id: atlaslhc2012
    total-walltime [h]: 150000 , total-storage-space [GB]: 10000
    Cyfronet up to: walltime [h]: 100000, total-storage-space [GB]: 4,000
    PSNC up to: walltime [h]: 150000, storage-space [GB]: 10,000

>
>
  • Team id: plggatlaslhc, Grant id: atlaslhc2013
    total-walltime [h]: 50000 , total-storage-space [GB]: 50000
    Cyfronet up to: walltime [h]: 35000, total-storage-space [GB]: 50,000
    PSNC up to: walltime [h]: 15000, storage-space [GB]: 1,000

 
  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
    PSNC: /home/plgrid-groups/plggatlaslhc
Changed:
<
<
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2012
    PSNC: qsub -q plgrid-long -A atlaslhc2012
>
>
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2013
    PSNC: qsub -q plgrid-long -A atlaslhc2013
 
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
    PSNC: --site ANALY_PSNC
  • One can start local personal pilot jobs to speed up processing:
    • initialize your voms proxy
Line: 101 to 101
 
    • setup DBRelease environment
      export DBRELEASE_INSTALLDIR=/home/people/b14olsze/Atlas DBRELEASE_VERSION=7.7.1
      export ATLAS_DB_AREA=${DBRELEASE_INSTALLDIR}
      export DBRELEASE_OVERRIDE=${DBRELEASE_VERSION}
  • export AtlasSetup=/mnt/auto/software/grid/atlas/prod/releases/rel_17-25/AtlasSetup
    alias asetup='source $AtlasSetup/scripts/asetup.sh'
    asetup 17.2.6.2 4 (or any other available)
  • source $VO_ATLAS_SW_DIR/local/setup.sh
Deleted:
<
<
-- AndrzejOlszewski - 20 Oct 2012
 \ No newline at end of file
Added:
>
>
-- AndrzejOlszewski - 1 March 2013
 \ No newline at end of file

Revision 14 - 2012-12-13 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Line: 72 to 72
 
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2012
    PSNC: qsub -q plgrid-long -A atlaslhc2012
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
    PSNC: --site ANALY_PSNC
  • One can start local personal pilot jobs to speed up processing:
Added:
>
>
    • initialize your voms proxy
 
    • Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
    • PSNC: /home/plgrid-groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)

Revision 13 - 2012-10-21 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet and PSNC

Machines Storage PLGrid Job management Atlas software
Tier2 Monitoring

Changed:
<
<
News: new resources available from grant atlaslhc2012 in PLGrid [#PLGridAnchor]
>
>
News: new resources available from grant atlaslhc2012 in PLGrid #PLGridAnchor
 
Grid clusters
Line: 12 to 12
 
    • CE CREAM SL5: cream.grid.cyf-kr.edu.pl, cream02.grid.cyf-kr.edu.pl
    • Scientific Linux 5, x86_64, gLite 3.2+32-bit compatibility libraries+python32
    • User interface machines: ui.grid.cyf-kr.edu.pl
Changed:
<
<
>
>
  • PSNC

 
    • CE CREAM SL5: creamce.reef.man.poznan.pl
    • Scientific Linux 5, x86_64, gLite 3.2+32-bit compatibility libraries+python32
    • User interface machines: ui.reef.man.poznan.pl
Line: 19 to 18
 
    • User interface machines: ui.reef.man.poznan.pl
Storage
Deleted:
<
<
 
  • Storage at Cyfronet:
Changed:
<
<
    • mounted on UI and all EGEE WNs, access for regular users, backed up every day
>
>
    • mounted on UI and all EGI WNs, access for regular users, backed up every day
 
      • quota 5 GB (soft, 7 days allowed) 7 GB (hard)
        /people/< user >
Changed:
<
<
    • mounted on UI and all EGEE WNs, should be used to store data, not backed up
>
>
    • mounted on UI and all EGI WNs, should be used to store data, not backed up
 
      • quota 80GB (soft), 100GB (hard)
        /storage/< user >
Changed:
<
<
    • mounted on UI and all EGEE WNs, should be used for computing,
>
>
    • mounted on UI and all EGI WNs, should be used for computing,
 
      • files cleaned after 14 days
        /mnt/lustre/scratch< user >
    • mounted only on WNs, access for grid users
      • /home/grid - home for grid users
    • experiment software
      • /software/grid
Changed:
<
<
    • Large Sun Thumper disk arrays, total currently ~340 TB connected
>
>
    • DPM disk arrays, total currently ~340 TB connected
 
      • Access through SRM interface at dpm.cyf-kr.edu.pl
      • srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas
Added:
>
>
      • useful tokens: 20 TB on ATLASSCRATCHDISK, 10 TB on ATLASLOCALDISK
 
    • Internal IDE disks on WN access to local /home and /tmp directories, but only for temporary use during job execution
Changed:
<
<
>
>
  • Storage at PSNC:
    • mounted on UI and all EGI WNs, access for local users
      • /home/users/< user >
    • experiment software
      • /opt/exp_soft/atlas
    • DPM disk arrays, total currently ~340 TB connected
      • Access through SRM interface at dpm.cyf-kr.edu.pl
      • srm://se.reef.man.poznan.pl:8446/dpm/reef.man.poznan.pl/home/atlas/
      • useful tokens: 20 TB on ATLASSCRATCHDISK

Atlas Data Access Tools
 
    • Installed in Atlas software area, at: /software/grid/atlas/ddm
Changed:
<
<
    • Setup: source /software/grid/atlas/ddm/latest/setup.sh
      ignore warnings about site name mismatch or use "-L ROAMING" option
  • Frontier Database access
>
>
  • Setup: source /software/grid/atlas/ddm/latest/setup.sh
  • ignore warnings about site name mismatch or use "-L ROAMING" option

Frontier Database access
 
    • Athena in release >=15.4.0 can now access information from database (geometry, conditions, etc.) using Frontier/Squid web caching.
    • A Frontier server, with integrated Squid proxy, is installed at GridKa, at: http://atlassq1-fzk.gridka.de:8021/fzk
    • Tier2 Squid (test instance) is installed also at CYFRONET-LCG2, using it may help with longer network latencies.
      http://atlas.grid.cyf-kr.edu.pl:3128
    • Default setup is contained in $VO_ATLAS_SW_DIR/local/setup.sh
      Source it to get access to DB info for real data
Changed:
<
<
>
>
 
Added:
>
>
 
Resources from PLGrid grants

New resources and services are becoming available from PLGrid and HEPGrid

Line: 51 to 61
 
Resources from PLGrid grants

New resources and services are becoming available from PLGrid and HEPGrid

Changed:
<
<
  • Team: plggatlaslhc, grant atlaslhc2012
    total-walltime [h]: 150000 , total-storage-space [GB]: 10000
    Cyfronet up to: walltime [h]: 100000, total-storage-space [GB]: 4,000
    PSNC up to: walltime [h]: 150000, storage-space [GB]: 10,000

>
>
How to get access:
  • Register as a user from Polish scientific community in PLGrid (see help in HEPGrid link)
  • In the PLGrid account page select folder "Zespoly i Granty", search atlaslhc team and request to be included
  • Contact A. Olszewski to discuss time scale and amount of resources needed
Description:
  • Team id: plggatlaslhc, Grant id: atlaslhc2012
    total-walltime [h]: 150000 , total-storage-space [GB]: 10000
    Cyfronet up to: walltime [h]: 100000, total-storage-space [GB]: 4,000
    PSNC up to: walltime [h]: 150000, storage-space [GB]: 10,000

 
  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
    PSNC: /home/plgrid-groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2012
    PSNC: qsub -q plgrid-long -A atlaslhc2012
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
    PSNC: --site ANALY_PSNC
Line: 66 to 82
 
Changed:
<
<
    • PBS commands can be executed on UI and WN nodes, but jobs are submitted only from UI
  • Queues on EGEE: qstat -q regular, local queues are: l_short, l_long, l_infinite, l_prio queues for Grid VO users: atlas, biomed, alice, ... Job submission and management is done from user interface machine: ui.cyf-kr.edu.pl qsub -q queue_name job_script qsub -q queue_name l_infinite job_script Interactive work, starting a new session on WN: qsub -I -q l_infinite If this works too slow, one can try using "-q l_prio" If this doesn't help, one can connect by ssh. But first check which nodes are free: pbsnodes -a | grep -B 1 "state = free"
>
>
    • PBS commands (example at Cyfronet) can be executed on UI and WN nodes, but jobs are submitted only from UI
  • Queues on EGI: qstat -q regular, local queues are: l_short, l_long, l_infinite, l_prio queues for Grid VO users: atlas, biomed, alice, ... Job submission and management is done from user interface machine: ui.cyf-kr.edu.pl qsub -q queue_name job_script qsub -q queue_name l_infinite job_script Interactive work, starting a new session on WN: qsub -I -q l_infinite If this works too slow, one can try using "-q l_prio" If this doesn't help, one can connect by ssh. But first check which nodes are free: pbsnodes -a | grep -B 1 "state = free"
 
Atlas Software
Changed:
<
<
  • Grid production installation: cluster ce.grid.cyf-kr.edu.pl
>
>
  • Grid production installation: cluster creamce.grid.cyf-kr.edu.pl
 
Line: 82 to 98
 
    • install DBRelease with Pacman (or just download and unpack)
    • cd /home/people/b14olsze/Atlas
      pacman -allow trust-all-caches -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/pacman4/DBRelease:DBRelease-7.7.1
    • setup DBRelease environment
      export DBRELEASE_INSTALLDIR=/home/people/b14olsze/Atlas DBRELEASE_VERSION=7.7.1
      export ATLAS_DB_AREA=${DBRELEASE_INSTALLDIR}
      export DBRELEASE_OVERRIDE=${DBRELEASE_VERSION}
Changed:
<
<
  • export AtlasSetup=/mnt/auto/software/grid/atlas/prod/releases/rel_17-25/AtlasSetup
    alias asetup='source $AtlasSetup/scripts/asetup.sh'
    asetup 17.2.6.2 4
>
>
  • export AtlasSetup=/mnt/auto/software/grid/atlas/prod/releases/rel_17-25/AtlasSetup
    alias asetup='source $AtlasSetup/scripts/asetup.sh'
    asetup 17.2.6.2 4 (or any other available)
 
  • source $VO_ATLAS_SW_DIR/local/setup.sh
Changed:
<
<
-- AndrzejOlszewski - 8 Oct 2012
>
>
-- AndrzejOlszewski - 20 Oct 2012

Revision 12 - 2012-10-20 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"
Changed:
<
<

Atlas Computing Resources at Cyfronet

>
>

Atlas Computing Resources at Cyfronet and PSNC

  Machines Storage PLGrid Job management Atlas software
Tier2 Monitoring

News: new resources available from grant atlaslhc2012 in PLGrid [#PLGridAnchor]

Changed:
<
<
Clusters available for Atlas users:
  • EGEE (LCG) CE CREAM SL5:
    cream.grid.cyf-kr.edu.pl, cream02.grid.cyf-kr.edu.pl
  • Scientific Linux 5.4, x86_64, gLite 3.2+32-bit compatibility libraries+python32
  • Total of 10,656 WN cores User interface machines: ui.grid.cyf-kr.edu.pl
>
>
Grid clusters
  • Cyfronet
    • CE CREAM SL5: cream.grid.cyf-kr.edu.pl, cream02.grid.cyf-kr.edu.pl
    • Scientific Linux 5, x86_64, gLite 3.2+32-bit compatibility libraries+python32
    • User interface machines: ui.grid.cyf-kr.edu.pl
  • PSNC
    • CE CREAM SL5: creamce.reef.man.poznan.pl
    • Scientific Linux 5, x86_64, gLite 3.2+32-bit compatibility libraries+python32
    • User interface machines: ui.reef.man.poznan.pl
 
Storage
Changed:
<
<
  • Storage:
>
>
  • Storage at Cyfronet:
 
    • mounted on UI and all EGEE WNs, access for regular users, backed up every day
      • quota 5 GB (soft, 7 days allowed) 7 GB (hard)
        /people/< user >
Deleted:
<
<
    • needs to be migrated to current home, with separated backed up and not backed up areas
      • /people/< user >/old_home
 
    • mounted on UI and all EGEE WNs, should be used to store data, not backed up
      • quota 80GB (soft), 100GB (hard)
        /storage/< user >
    • mounted on UI and all EGEE WNs, should be used for computing,
Changed:
<
<
      • files cleaned after 14 days
        /scratch-lustre/< user >
>
>
      • files cleaned after 14 days
        /mnt/lustre/scratch< user >
 
    • mounted only on WNs, access for grid users
      • /home/grid - home for grid users
    • experiment software
Line: 31 to 35
 
      • Access through SRM interface at dpm.cyf-kr.edu.pl
      • srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas
    • Internal IDE disks on WN access to local /home and /tmp directories, but only for temporary use during job execution
Deleted:
<
<
    • More info on access to data on storage at Cyfronet
 
Line: 48 to 51
 
Resources from PLGrid grants

New resources and services are becoming available from PLGrid and HEPGrid

Changed:
<
<
  • Team: plggatlaslhc, grant atlaslhc2012
    total-walltime [h]: 150000 , total-storage-space [GB]: 10000
    Cyfronet: total-walltime [h]: 100000, total-storage-space [GB]: 4000

  • Group disk: /mnt/lustre/scratch/groups/plggatlaslhc
  • Local jobs: qsub -q l_long -A atlaslhc2012
  • Atlas analysis jobs: you can send jobs using pathena, prun
>
>
  • Team: plggatlaslhc, grant atlaslhc2012
    total-walltime [h]: 150000 , total-storage-space [GB]: 10000
    Cyfronet up to: walltime [h]: 100000, total-storage-space [GB]: 4,000
    PSNC up to: walltime [h]: 150000, storage-space [GB]: 10,000

  • Group disk:
    Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc
    PSNC: /home/plgrid-groups/plggatlaslhc
  • Local jobs:
    Cyfronet: qsub -q plgrid-long -A atlaslhc2012
    PSNC: qsub -q plgrid-long -A atlaslhc2012
  • Atlas analysis jobs: you can send jobs using pathena, prun
    Cyfronet: --site ANALY_CYF
    PSNC: --site ANALY_PSNC
 
  • One can start local personal pilot jobs to speed up processing:
Changed:
<
<
    • /mnt/lustre/scratch/groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
>
>
    • Cyfronet: /mnt/lustre/scratch/groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
    • PSNC: /home/plgrid-groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
 
Job Management
  • Portable Batch System (PBS) (unsupported OpenPBS)

Revision 11 - 2012-10-08 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Changed:
<
<
Machines Storage Job management Atlas software
Tier2 Monitoring
>
>
Machines Storage PLGrid Job management Atlas software
Tier2 Monitoring
 
Changed:
<
<
News: new resources available from grant atlaslhc2012 in PLGrid
>
>
News: new resources available from grant atlaslhc2012 in PLGrid [#PLGridAnchor]
 
Changed:
<
<
     Clusters available for Atlas users:

           EGEE (LCG) 
                             CE CREAM SL5: cream.grid.cyf-kr.edu.pl:8443/cream-pbs-atlas5
                             Scientific Linux 5.4, x86_64, gLite 3.2+32-bit compatibility libraries+python32 
                Total of 2396 WN cores                 
 

     User interface machines: ui.grid.cyf-kr.edu.pl
>
>
Clusters available for Atlas users:
  • EGEE (LCG) CE CREAM SL5:
    cream.grid.cyf-kr.edu.pl, cream02.grid.cyf-kr.edu.pl
  • Scientific Linux 5.4, x86_64, gLite 3.2+32-bit compatibility libraries+python32
  • Total of 10,656 WN cores User interface machines: ui.grid.cyf-kr.edu.pl
 
Changed:
<
<
Storage

         Storage:  
              - mounted on UI and all EGEE WNs, access for regular users, backed up every day
                quota 5 GB (soft, 7 days allowed) 7 GB (hard)
                  /people/< user >
                needs to be migrated to current home, with separated backed up and not backed up areas
                  /people/< user >/old_home 
              - mounted on UI and all EGEE WNs, should be used to store data, not backed up
                quota 80GB (soft), 100GB (hard)
                  /storage/< user >
              - mounted on UI and all EGEE WNs, should be used for computing, files cleaned after 14 days
                  /scratch-lustre/< user >
              - mounted only on WNs, access for grid users
                  /home/grid  - home for grid users
              - experiment software
                  /software/grid 
            
             Large Sun Thumper disk arrays, total currently ~240 TB connected
             Access through SRM interface at dpm.cyf-kr.edu.pl
             srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas

             Internal IDE disks on WN
                access to local /home and /tmp directories, but only for temporary use during job execution

             More info on access to data on storage at Cyfronet

Atlas Data Access Tools

             DQ2 client commands, described at: https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Clients
             Installed in Atlas software area, at: /software/grid/atlas/ddm
             Setup: source /software/grid/atlas/ddm/latest/setup.sh
                        ignore warnings about site name mismatch or use "-L ROAMING" option

Frontier Database access

             Athena in release >=15.4.0 can now access information from database (geometry, conditions, etc.) 
             using Frontier/Squid web caching.
             A Frontier server, with integrated Squid proxy, is installed at GridKa, at: 
             http://atlassq1-fzk.gridka.de:8021/fzk
             Tier2 Squid (test instance) is installed also at CYFRONET-LCG2, 
             using it may help with longer network latencies. 
             http://atlas.grid.cyf-kr.edu.pl:3128
             Default setup is contained in $VO_ATLAS_SW_DIR/local/setup.sh 
             Source it to get access to DB info for real data

             see also: 
                Proposal for DBase access method:
                https://twiki.cern.ch/twiki/bin/view/Atlas/RemoteConditionsDataAccess
                https://twiki.cern.ch/twiki/bin/view/Atlas/T2SquidDeployment
                RACF Frontier pages:
                https://www.racf.bnl.gov/docs/services/frontier 
>
>
Storage
 
Added:
>
>
  • Storage:
    • mounted on UI and all EGEE WNs, access for regular users, backed up every day
      • quota 5 GB (soft, 7 days allowed) 7 GB (hard)
        /people/< user >
    • needs to be migrated to current home, with separated backed up and not backed up areas
      • /people/< user >/old_home
    • mounted on UI and all EGEE WNs, should be used to store data, not backed up
      • quota 80GB (soft), 100GB (hard)
        /storage/< user >
    • mounted on UI and all EGEE WNs, should be used for computing,
      • files cleaned after 14 days
        /scratch-lustre/< user >
    • mounted only on WNs, access for grid users
      • /home/grid - home for grid users
    • experiment software
      • /software/grid
    • Large Sun Thumper disk arrays, total currently ~340 TB connected
      • Access through SRM interface at dpm.cyf-kr.edu.pl
      • srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas
    • Internal IDE disks on WN access to local /home and /tmp directories, but only for temporary use during job execution
    • More info on access to data on storage at Cyfronet
  • Atlas Data Access Tools
    • DQ2 client commands, described at: https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Clients
    • Installed in Atlas software area, at: /software/grid/atlas/ddm
    • Setup: source /software/grid/atlas/ddm/latest/setup.sh
      ignore warnings about site name mismatch or use "-L ROAMING" option
  • Frontier Database access
Resources from PLGrid grants

New resources and services are becoming available from PLGrid and HEPGrid

  • Team: plggatlaslhc, grant atlaslhc2012
    total-walltime [h]: 150000 , total-storage-space [GB]: 10000
    Cyfronet: total-walltime [h]: 100000, total-storage-space [GB]: 4000

  • Group disk: /mnt/lustre/scratch/groups/plggatlaslhc
  • Local jobs: qsub -q l_long -A atlaslhc2012
  • Atlas analysis jobs: you can send jobs using pathena, prun
  • One can start local personal pilot jobs to speed up processing:
    • /mnt/lustre/scratch/groups/plggatlaslhc/Panda/submit_pilots.sh (number of pilots) (local queue name) (grant name)
 
Changed:
<
<
Job Management

         Portable Batch System (PBS) (unsupported OpenPBS)
            Batch job submission and workload management system for a Linux cluster.
            More info on man pages: man pbs ...
                                 User guides on other sites: http://www.doesciencegrid.org/public/pbs/homepage.html
                                                                             http://hpc.sissa.it/pbs/
                                 mini HowTo:                      http://dcwww.camp.dtu.dk/pbs.html
                                 Linux Magazine article:     http://www.linux-mag.com/2002-10/extreme_01.html

             PBS commands can be executed on UI and WN nodes, but jobs are submitted only from UI

             Queues on EGEE: qstat -q
                                                  regular, local queues are: l_short, l_long, l_infinite, l_prio
                                                  queues for Grid VO users: atlas, biomed, alice, ...

              Job submission and management is done from user interface machine: ui.cyf-kr.edu.pl
                 qsub -q queue_name job_script  
                 qsub -q queue_name l_infinite job_script

              Interactive work, starting a new session on WN:
                 qsub -I -q l_infinite

              If this works too slow, one can try using "-q l_prio"
              If this doesn't help, one can connect by ssh. But first check which nodes are free:
              pbsnodes -a | grep -B 1 "state = free"

>
>
Job Management
  • Portable Batch System (PBS) (unsupported OpenPBS)
  • Queues on EGEE: qstat -q regular, local queues are: l_short, l_long, l_infinite, l_prio queues for Grid VO users: atlas, biomed, alice, ... Job submission and management is done from user interface machine: ui.cyf-kr.edu.pl qsub -q queue_name job_script qsub -q queue_name l_infinite job_script Interactive work, starting a new session on WN: qsub -I -q l_infinite If this works too slow, one can try using "-q l_prio" If this doesn't help, one can connect by ssh. But first check which nodes are free: pbsnodes -a | grep -B 1 "state = free"
 
Changed:
<
<
Atlas Software

       Grid production installation:       
           Cluster ce.grid.cyf-kr.edu.pl
     logical catalog: /software/grid/atlas/software/${release}
     physical catalog: /software/grid/atlas/prod/releases/rel*

To check releases currently available see: https://atlas-install.roma1.infn.it/atlas_install/
To lock release at the Cyfronet site go to: https://atlas-install.roma1.infn.it/atlas_install/protected/pin.php

For athena setup see:  https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBook
For access to DB conditions for real data: source $VO_ATLAS_SW_DIR/local/setup.sh

Prescription for use of non-default DBRelease: 
see also https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasDBRelease#Distribution_Installation

example:
1) install DBRelease with Pacman (or just download and unpack)
    cd /home/people/b14olsze/Atlas
    pacman -allow trust-all-caches -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/pacman4/DBRelease:DBRelease-7.7.1
2) setup DBRelease environment
    export DBRELEASE_INSTALLDIR=/home/people/b14olsze/Atlas
    DBRELEASE_VERSION=7.7.1
    export ATLAS_DB_AREA=${DBRELEASE_INSTALLDIR}
    export DBRELEASE_OVERRIDE=${DBRELEASE_VERSION}
3) setup athena (kit/release) environment
4) source $VO_ATLAS_SW_DIR/local/setup.sh

-- AndrzejOlszewski - 15 Feb 2010

>
>
Atlas Software
-- AndrzejOlszewski - 8 Oct 2012

Revision 10 - 2012-10-08 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Changed:
<
<
Machines Storage Job management Atlas software
>
>
Machines Storage Job management Atlas software
Tier2 Monitoring
 
Changed:
<
<
News: atlas moved to use only SL5 WNs with new atlas software area in effect.
>
>
News: new resources available from grant atlaslhc2012 in PLGrid
 
Changed:
<
<
Computing Machines
     Ganglia monitoring:  https://zeus21.cyf-kr.edu.pl/ganglia

>
>
     Clusters available for Atlas users:

 
Changed:
<
<
Clusters available for Atlas users:
EGEE (LCG) CE SL5: ce.grid.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas5
>
>
EGEE (LCG)
  CE CREAM SL5: cream.grid.cyf-kr.edu.pl:8443/cream-pbs-atlas5 Scientific Linux 5.4, x86_64, gLite 3.2+32-bit compatibility libraries+python32 Total of 2396 WN cores
Line: 22 to 19
 

Changed:
<
<
Storage

>
>
Storage

 
Storage
- mounted on UI and all EGEE WNs, access for regular users, backed up every day
Line: 59 to 55
  Frontier Database access
Changed:
<
<
Athena in release >=15.4.0 can now access information from database (geometry, conditions, etc.)
>
>
Athena in release >=15.4.0 can now access information from database (geometry, conditions, etc.)
  using Frontier/Squid web caching. A Frontier server, with integrated Squid proxy, is installed at GridKa, at: http://atlassq1-fzk.gridka.de:8021/fzk
Line: 78 to 74
 

Changed:
<
<
Job Management

>
>
Job Management

  Portable Batch System (PBS) (unsupported OpenPBS) Batch job submission and workload management system for a Linux cluster.
Line: 109 to 104
 

Changed:
<
<
Atlas Software

>
>
Atlas Software

  Grid production installation: Cluster ce.grid.cyf-kr.edu.pl

Revision 9 - 2010-06-23 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Line: 26 to 26
 Storage

Storage
Changed:
<
<
mounted on UI and all EGEE WNs, access for regular users /home/people mounted on EGEE UI and WNs, access for grid users
>
>
- mounted on UI and all EGEE WNs, access for regular users, backed up every day quota 5 GB (soft, 7 days allowed) 7 GB (hard) /people/< user > needs to be migrated to current home, with separated backed up and not backed up areas /people/< user >/old_home - mounted on UI and all EGEE WNs, should be used to store data, not backed up quota 80GB (soft), 100GB (hard) /storage/< user > - mounted on UI and all EGEE WNs, should be used for computing, files cleaned after 14 days /scratch-lustre/< user > - mounted only on WNs, access for grid users
  /home/grid - home for grid users
Changed:
<
<
/software/grid - experiment software
>
>
- experiment software /software/grid
  Large Sun Thumper disk arrays, total currently ~240 TB connected Access through SRM interface at dpm.cyf-kr.edu.pl

Revision 8 - 2010-03-22 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Line: 28 to 28
 
Storage
mounted on UI and all EGEE WNs, access for regular users /home/people
Changed:
<
<
mounted on EGEE WNs only, access for grid users
>
>
mounted on EGEE UI and WNs, access for grid users
  /home/grid - home for grid users
Changed:
<
<
/software - experiment software
>
>
/software/grid - experiment software
  Large Sun Thumper disk arrays, total currently ~240 TB connected Access through SRM interface at dpm.cyf-kr.edu.pl
Line: 44 to 44
 Atlas Data Access Tools

DQ2 client commands, described at: https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Clients

Changed:
<
<
Installed in Atlas software area, at: /software/atlas/ddm Available for SL4, to be used only on WN machines Setup: source /software/atlas/ddm/latest/setup.sh
>
>
Installed in Atlas software area, at: /software/grid/atlas/ddm Setup: source /software/grid/atlas/ddm/latest/setup.sh
  ignore warnings about site name mismatch or use "-L ROAMING" option

Frontier Database access

Line: 105 to 104
 Atlas Software

Grid production installation:

Changed:
<
<
Cluster ce.cyf-kr.edu.pl
>
>
Cluster ce.grid.cyf-kr.edu.pl
  logical catalog: /software/grid/atlas/software/${release} physical catalog: /software/grid/atlas/prod/releases/rel*

Revision 7 - 2010-02-26 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Machines Storage Job management Atlas software

Added:
>
>
News: atlas moved to use only SL5 WNs with new atlas software area in effect.
 
Computing Machines
     Ganglia monitoring:  https://zeus21.cyf-kr.edu.pl/ganglia

Clusters available for Atlas users:
Changed:
<
<
EGEE (LCG) CE: ce.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas Scientific Linux 4.5, x86_64, gLite 3.1+32-bit compatibility libraries+python32 CE SL5: ce.grid.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas CE CREAM SL5: creamce.grid.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas
>
>
EGEE (LCG) CE SL5: ce.grid.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas5 CE CREAM SL5: cream.grid.cyf-kr.edu.pl:8443/cream-pbs-atlas5
  Scientific Linux 5.4, x86_64, gLite 3.2+32-bit compatibility libraries+python32 Total of 2396 WN cores

Changed:
<
<
User interface machines: SL3 (soon SL5): ui.cyf-kr.edu.pl SL4 (recommended): ui.grid.cyf-kr.edu.pl
>
>
User interface machines: ui.grid.cyf-kr.edu.pl
 

Line: 107 to 106
  Grid production installation: Cluster ce.cyf-kr.edu.pl
Changed:
<
<
logical catalog: /software/atlas/software/${release} physical catalog: /software/atlas/prod/releases/rel*
>
>
logical catalog: /software/grid/atlas/software/${release} physical catalog: /software/grid/atlas/prod/releases/rel*
  To check releases currently available see: https://atlas-install.roma1.infn.it/atlas_install/ To lock release at the Cyfronet site go to: https://atlas-install.roma1.infn.it/atlas_install/protected/pin.php
Line: 132 to 131
 4) source $VO_ATLAS_SW_DIR/local/setup.sh
Deleted:
<
<
-- AndrzejOlszewski - 15 Jan 2010
 \ No newline at end of file
Added:
>
>
-- AndrzejOlszewski - 15 Feb 2010
 \ No newline at end of file

Revision 6 - 2010-01-16 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Line: 7 to 7
 
Computing Machines

Changed:
<
<
Ganglia monitoring: http://zeus21.cyf-kr.edu.pl/ganglia

>
>
Ganglia monitoring: https://zeus21.cyf-kr.edu.pl/ganglia

  Clusters available for Atlas users:
EGEE (LCG) CE: ce.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas
Deleted:
<
<
Total of 2396 WN cores
  Scientific Linux 4.5, x86_64, gLite 3.1+32-bit compatibility libraries+python32
Added:
>
>
CE SL5: ce.grid.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas CE CREAM SL5: creamce.grid.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas Scientific Linux 5.4, x86_64, gLite 3.2+32-bit compatibility libraries+python32 Total of 2396 WN cores

 
Changed:
<
<
User interface machines: ui.cyf-kr.edu.pl
>
>
User interface machines: SL3 (soon SL5): ui.cyf-kr.edu.pl SL4 (recommended): ui.grid.cyf-kr.edu.pl
 

Line: 55 to 60
  using it may help with longer network latencies. http://atlas.grid.cyf-kr.edu.pl:3128 Default setup is contained in $VO_ATLAS_SW_DIR/local/setup.sh
Added:
>
>
Source it to get access to DB info for real data
  see also: Proposal for DBase access method: https://twiki.cern.ch/twiki/bin/view/Atlas/RemoteConditionsDataAccess
Added:
>
>
https://twiki.cern.ch/twiki/bin/view/Atlas/T2SquidDeployment
  RACF Frontier pages: https://www.racf.bnl.gov/docs/services/frontier
Line: 107 to 114
 To lock release at the Cyfronet site go to: https://atlas-install.roma1.infn.it/atlas_install/protected/pin.php

For athena setup see: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBook

Added:
>
>
For access to DB conditions for real data: source $VO_ATLAS_SW_DIR/local/setup.sh
Prescription for use of non-default DBRelease: see also https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasDBRelease#Distribution_Installation
Line: 121 to 129
  export ATLAS_DB_AREA=${DBRELEASE_INSTALLDIR} export DBRELEASE_OVERRIDE=${DBRELEASE_VERSION} 3) setup athena (kit/release) environment
Added:
>
>
4) source $VO_ATLAS_SW_DIR/local/setup.sh
 
Deleted:
<
<
-- AndrzejOlszewski - 16 Mar 2009
 \ No newline at end of file
Added:
>
>
-- AndrzejOlszewski - 15 Jan 2010
 \ No newline at end of file

Revision 5 - 2009-12-11 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Line: 108 to 108
  For athena setup see: https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBook
Added:
>
>
Prescription for use of non-default DBRelease: see also https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasDBRelease#Distribution_Installation

example: 1) install DBRelease with Pacman (or just download and unpack) cd /home/people/b14olsze/Atlas pacman -allow trust-all-caches -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/pacman4/DBRelease:DBRelease-7.7.1 2) setup DBRelease environment export DBRELEASE_INSTALLDIR=/home/people/b14olsze/Atlas DBRELEASE_VERSION=7.7.1 export ATLAS_DB_AREA=${DBRELEASE_INSTALLDIR} export DBRELEASE_OVERRIDE=${DBRELEASE_VERSION} 3) setup athena (kit/release) environment

 

-- AndrzejOlszewski - 16 Mar 2009 \ No newline at end of file

Revision 4 - 2009-11-19 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Line: 77 to 77
  PBS commands can be executed on UI and WN nodes, but jobs are submitted only from UI
Changed:
<
<
Queues on EGEE: qstat -q (qstat -q @ce)
>
>
Queues on EGEE: qstat -q
  regular, local queues are: l_short, l_long, l_infinite, l_prio queues for Grid VO users: atlas, biomed, alice, ...

Job submission and management is done from user interface machine: ui.cyf-kr.edu.pl qsub -q queue_name job_script

Changed:
<
<
qsub -q queue_name l_infinite@ce job_script
>
>
qsub -q queue_name l_infinite job_script
  Interactive work, starting a new session on WN:
Changed:
<
<
qsub -I -q l_infinite (qsub -I -q l_infinite@ce)
>
>
qsub -I -q l_infinite
  If this works too slow, one can try using "-q l_prio" If this doesn't help, one can connect by ssh. But first check which nodes are free:

Revision 3 - 2009-09-25 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Line: 6 to 6
 

Changed:
<
<
Machines
>
>
Computing Machines
  Ganglia monitoring: http://zeus21.cyf-kr.edu.pl/ganglia

Clusters available for Atlas users:

Line: 44 to 44
  Available for SL4, to be used only on WN machines Setup: source /software/atlas/ddm/latest/setup.sh ignore warnings about site name mismatch or use "-L ROAMING" option
Added:
>
>
Frontier Database access

Athena in release >=15.4.0 can now access information from database (geometry, conditions, etc.) using Frontier/Squid web caching. A Frontier server, with integrated Squid proxy, is installed at GridKa, at: http://atlassq1-fzk.gridka.de:8021/fzk Tier2 Squid (test instance) is installed also at CYFRONET-LCG2, using it may help with longer network latencies. http://atlas.grid.cyf-kr.edu.pl:3128 Default setup is contained in $VO_ATLAS_SW_DIR/local/setup.sh

see also: Proposal for DBase access method: https://twiki.cern.ch/twiki/bin/view/Atlas/RemoteConditionsDataAccess RACF Frontier pages: https://www.racf.bnl.gov/docs/services/frontier

 

Revision 2 - 2009-08-25 - AndrzejOlszewski

Line: 1 to 1
 
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Line: 43 to 43
  Installed in Atlas software area, at: /software/atlas/ddm Available for SL4, to be used only on WN machines Setup: source /software/atlas/ddm/latest/setup.sh
Deleted:
<
<
setup Athena for python libraries
  ignore warnings about site name mismatch or use "-L ROAMING" option
Deleted:
<
<
DQ2 end user tools are described at https://twiki.cern.ch/twiki/bin/view/Atlas/UsingDQ2 Installed at: /dq2_user_client Setup: source /dq2_user_client/setup.sh
 

Revision 1 - 2009-03-16 - AndrzejOlszewski

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="CyfronetWeb"

Atlas Computing Resources at Cyfronet

Machines Storage Job management Atlas software

Machines
     Ganglia monitoring:  http://zeus21.cyf-kr.edu.pl/ganglia

Clusters available for Atlas users:
EGEE (LCG) CE: ce.cyf-kr.edu.pl:2119/jobmanager-pbs-atlas Total of 2396 WN cores Scientific Linux 4.5, x86_64, gLite 3.1+32-bit compatibility libraries+python32 User interface machines: ui.cyf-kr.edu.pl

Storage

         Storage:  
              mounted on UI and all EGEE WNs, access for regular users
                  /home/people 
              mounted on EGEE WNs only, access for grid users
                  /home/grid - home for grid users
                  /software - experiment software
            
             Large Sun Thumper disk arrays, total currently ~240 TB connected
             Access through SRM interface at dpm.cyf-kr.edu.pl
             srm://dpm.cyf-kr.edu.pl:8443/dpm/cyf-kr.edu.pl/home/atlas

             Internal IDE disks on WN
                access to local /home and /tmp directories, but only for temporary use during job execution

             More info on access to data on storage at Cyfronet

Atlas Data Access Tools

             DQ2 client commands, described at: https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Clients
             Installed in Atlas software area, at: /software/atlas/ddm
             Available for SL4, to be used only on WN machines
             Setup: source /software/atlas/ddm/latest/setup.sh
                        setup Athena for python libraries
                        ignore warnings about site name mismatch or use "-L ROAMING" option

             DQ2 end user tools are described at https://twiki.cern.ch/twiki/bin/view/Atlas/UsingDQ2
             Installed at: /dq2_user_client
             Setup:
                    source /dq2_user_client/setup.sh

Job Management

         Portable Batch System (PBS) (unsupported OpenPBS)
            Batch job submission and workload management system for a Linux cluster.
            More info on man pages: man pbs ...
                                 User guides on other sites: http://www.doesciencegrid.org/public/pbs/homepage.html
                                                                             http://hpc.sissa.it/pbs/
                                 mini HowTo:                      http://dcwww.camp.dtu.dk/pbs.html
                                 Linux Magazine article:     http://www.linux-mag.com/2002-10/extreme_01.html

             PBS commands can be executed on UI and WN nodes, but jobs are submitted only from UI

             Queues on EGEE: qstat -q (qstat -q @ce)
                                                  regular, local queues are: l_short, l_long, l_infinite, l_prio
                                                  queues for Grid VO users: atlas, biomed, alice, ...

              Job submission and management is done from user interface machine: ui.cyf-kr.edu.pl
                 qsub -q queue_name job_script  
                 qsub -q queue_name l_infinite@ce job_script

              Interactive work, starting a new session on WN:
                 qsub -I -q l_infinite (qsub -I -q l_infinite@ce)

              If this works too slow, one can try using "-q l_prio"
              If this doesn't help, one can connect by ssh. But first check which nodes are free:
              pbsnodes -a | grep -B 1 "state = free"

Atlas Software

       Grid production installation:       
           Cluster ce.cyf-kr.edu.pl
     logical catalog: /software/atlas/software/${release}
     physical catalog: /software/atlas/prod/releases/rel*

To check releases currently available see: https://atlas-install.roma1.infn.it/atlas_install/
To lock release at the Cyfronet site go to: https://atlas-install.roma1.infn.it/atlas_install/protected/pin.php

For athena setup see:  https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBook

-- AndrzejOlszewski - 16 Mar 2009

 