Revision 17 as of 2008-10-30 11:40:14 (comment: converted to 1.6 markup)
setup on lxplus
your account should be able to run athena "hello world" as described in the workbook https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookRunAthenaHelloWorld
- login to lxplus
- start athena
> source ~/cmthome/setup.sh -tag=12.0.4
- start the grid
> source /afs/cern.ch/project/gd/LCG-share/sl3/etc/profile.d/grid_env.sh
> export LFC_HOST=lfc-atlas.cern.ch
> export LCG_CATALOG_TYPE=lfc
> voms-proxy-init
- create a long-term proxy (recommended for MC production; default lifetime is 1 week)
> myproxy-init
- check with
> myproxy-info
- the long-term proxy is stored on "myproxy.cern.ch" and is used automatically by ganga
- start dq2 (dataset management)
> source /afs/usatlas.bnl.gov/Grid/Don-Quijote/dq2_user_client/setup.sh.CERN
- start ganga (4.2.7) as atlas user in graphical user interface with
> /afs/cern.ch/sw/ganga/install/slc3_gcc323/4.2.7/bin/ganga --config-path=GangaAtlas/Atlas.ini --gui
- type "y" to create a default ganga config file ".gangarc"
- ganga should start
- open a second shell, and type
> source ~/cmthome/setup.sh -tag=12.0.4
> source /afs/cern.ch/project/gd/LCG-share/sl3/etc/profile.d/grid_env.sh
> export LFC_HOST=lfc-atlas.cern.ch
> export LCG_CATALOG_TYPE=lfc
> voms-proxy-init
> source /afs/usatlas.bnl.gov/Grid/Don-Quijote/dq2_user_client/setup.sh.CERN
- change into a run directory
> cd testarea/12.0.4/PhysicsAnalysis/AnalysisCommon/UserAnalysis/run
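Typing the lxplus setup commands above into every new shell is error-prone; a small sketch that collects them into one sourceable file (the commands are verbatim from the steps above; the file name setup_lxplus.sh is just an example, and the release tag should match your own setup):

```python
# Write the lxplus session setup (athena + grid + dq2) from the
# steps above into a single sourceable script. Paths and the -tag
# value are copied verbatim from this page; adjust the tag if you
# use another release.
commands = [
    "source ~/cmthome/setup.sh -tag=12.0.4",
    "source /afs/cern.ch/project/gd/LCG-share/sl3/etc/profile.d/grid_env.sh",
    "export LFC_HOST=lfc-atlas.cern.ch",
    "export LCG_CATALOG_TYPE=lfc",
    "voms-proxy-init",
    "source /afs/usatlas.bnl.gov/Grid/Don-Quijote/dq2_user_client/setup.sh.CERN",
]
with open("setup_lxplus.sh", "w") as f:
    f.write("\n".join(commands) + "\n")
```

afterwards, "source setup_lxplus.sh" prepares a fresh shell in one step (voms-proxy-init will still prompt for your grid pass phrase).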
setup at DESY Zeuthen
- start athena
> source /usr1/scratch/userscratch/USERNAME/cmthome/setup.sh -tag=12.0.3
- start the grid
> source /afs/desy.de/group/grid/UI/GLITE-3_0_4/etc/profile.d/grid_env.sh
> export LFC_HOST=lfc-atlas.cern.ch
> export LCG_CATALOG_TYPE=lfc
> voms-proxy-init
- log into lxplus in another shell and create a long-term proxy with "myproxy-init" as described above (myproxy-init fails on atlhltX)
- start dq2
> source /afs/ifh.de/project/grid/dq2/setup.sh
- start ganga
> ini ganga
> ganga --config-path=GangaAtlas/Atlas.ini --gui
hello world with Ganga in the graphical user interface
your run directory should contain "HelloWorldOptions.py"
- click on "new" in the ganga window to create a new job
- click on "application" and change from "Executable" to "Athena" in the drop-down box on the right side
- expand the application menu
- click on "option file" and browse to "HelloWorldOptions.py" on your disk to load it into ganga
- click on "max events" and write "10"
- click on "backend" and change from "local" to "LCG"
- click on "submit" button
- watch the status of the job move from "submitted" via "running" to "completed" (hopefully not to "failed")
- retrieve the output by right-clicking on the job and choosing "retrieve output", then click on the file you want to read
- you can also inspect input/output directly in ~/gangadir/workspace/...
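The GUI clicks above amount to a handful of job settings; a small sketch that records them for reference, with the sanity checks the steps imply (the dictionary keys simply mirror the GUI labels — they are not guaranteed to match the attribute names of Ganga's scripting interface):

```python
# Hello-world job as configured through the Ganga GUI above.
# Keys mirror the GUI labels; this is a reference checklist,
# not Ganga's scripting API.
hello_job = {
    "application": "Athena",               # changed from "Executable"
    "option file": "HelloWorldOptions.py",
    "max events": 10,
    "backend": "LCG",                      # changed from "local"
}

# basic sanity checks before clicking "submit"
assert hello_job["option file"].endswith(".py")
assert hello_job["max events"] > 0
assert hello_job["backend"] in ("local", "LCG")
```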
generating events (step 1 of the chain)
- the normal way would be to download the atlas production transform archive in the second shell; current releases can be found here
- by default you have to create a minimum of 5000 events, but for this educational test and to save time we create only 3 events
- creating more than 3 is simple and straightforward (it just takes longer)
- you can change the default of 5000 events as described here
- to save time, download a file already changed down to 3 events (the default is only changed for one example: "DC3.005711.PythiaB_bbmu6mu4X.py")
> cp "/afs/cern.ch/user/v/volkmann/public/AtlasProduction_12_0_31_8_noarch.tar.gz" .
- click on new in the ganga window
- change "application" to "AthenaMC"
- type into atlas release "12.0.31"
- type into eventgen_job_option "DC3.005711.PythiaB_bbmu6mu4X.py"
- type into firstevent "1"
- make sure that mode is "evgen"
- type into number_events_job "3"
- type into process_name something meaningful like "bbmu6mu4X"
- type into production_name another meaningful thing like "bphys"
- change the random seed to a 9 digit number
- type in a 6-digit run number like "000001"
- browse the transform archive "AtlasProduction_12_0_31_8_noarch.tar.gz" into ganga
- type into transform_script "csc_evgen_trf.py"
- click on backend and choose "LCG"
- expand LCG and choose one of the following computing elements
** tbn20.nikhef.nl:2119/jobmanager-pbs-qlong
** lcg-ce0.ifh.de:2119/jobmanager-lcgpbs-atlas
** grid-ce1.desy.de:2119/jobmanager-lcgpbs-atlas
- (you find all by typing "lcg-infosites --vo atlas ce" into the shell)
- click on outputdata and select "DQ2OutputDataset"
- click on splitter and choose "AthenaMCSplitterJob"
- click on numsubjobs and type "3"
- click on nsubjobs_inputfile and type also "3"
- submit the job
- once the 3 jobs reach "completed", inspect the output with dq2 in the other open shell (sometimes dq2 doesn't work on atlhltX)
- you find the output datasetname in ganga
> dq2_ls -g datasetname
- you should see the three root files (each has 1 event in this example)
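With this many fields it is easy to mistype one before submitting; a sketch that collects the evgen settings used above and checks the formatting rules the steps mention (9-digit random seed, 6-digit run number, subjob count matching the 3 events). The keys mirror the GUI labels and the values are the ones from the steps; treat it as a checklist, not as Ganga's scripting API:

```python
# Evgen job settings as entered in the Ganga GUI above.
# Keys mirror the GUI labels; values are the ones used in this example.
evgen_job = {
    "application": "AthenaMC",
    "atlas_release": "12.0.31",
    "eventgen_job_option": "DC3.005711.PythiaB_bbmu6mu4X.py",
    "firstevent": 1,
    "mode": "evgen",
    "number_events_job": 3,
    "process_name": "bbmu6mu4X",
    "production_name": "bphys",
    "random_seed": "123456789",   # any 9-digit number
    "run_number": "000001",       # a 6-digit run number
    "transform_script": "csc_evgen_trf.py",
    "backend": "LCG",
    "splitter": "AthenaMCSplitterJob",
    "numsubjobs": 3,
    "nsubjobs_inputfile": 3,
}

# formatting rules from the steps above
assert len(evgen_job["random_seed"]) == 9 and evgen_job["random_seed"].isdigit()
assert len(evgen_job["run_number"]) == 6 and evgen_job["run_number"].isdigit()
assert evgen_job["numsubjobs"] == evgen_job["number_events_job"]
```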
simulating and digitizing events (steps 2 and 3 of the chain)
- create a new job, and choose again "AthenaMC" in the application form
- type into the form the atlas_release, process_name, production_name and run_number of the previous generation job
- type into first_event "1"
- type into number_events_job "1"
- choose a 9-digit random seed
- browse the job transform archive "AtlasProduction_12_0_31_8_noarch.tar.gz" into the job again
- type into transform_script "csc_simul_trf.py"
- choose "LCG" in backend and type a computing element into it (as previously listed)
- click on inputdata and choose "DQ2OutputDataset" and expand the menu
- type into datasetname the name of the dataset from the previous job
- click on outputdata and choose "DQ2OutputDataset"
- choose in Splitter "AthenaMCSplitterJob" and expand
- type into n_subjobs_inputfile "1"
- type into number_input_jobs "3"
- type into numsubjobs "3"
- submit the job and inspect the result with dq2_ls -g datasetname
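The simulation job reuses the identifying fields of the generation job, as the steps above say; a sketch that makes the carry-over explicit (keys mirror the GUI labels and are a checklist, not Ganga's scripting API; the dataset name is a placeholder for your own evgen output):

```python
# Fields that must match the previous generation job exactly.
evgen_carryover = {
    "atlas_release": "12.0.31",
    "process_name": "bbmu6mu4X",
    "production_name": "bphys",
    "run_number": "000001",
}

# Simulation/digitization job: carry the fields over, then set the
# simulation-specific values from the steps above.
simul_job = dict(evgen_carryover)
simul_job.update({
    "application": "AthenaMC",
    "first_event": 1,
    "number_events_job": 1,
    "random_seed": "987654321",             # a fresh 9-digit seed
    "transform_script": "csc_simul_trf.py",
    "backend": "LCG",
    "inputdata": "YOUR_EVGEN_DATASETNAME",  # placeholder: name from the evgen job
    "splitter": "AthenaMCSplitterJob",
    "n_subjobs_inputfile": 1,
    "number_input_jobs": 3,
    "numsubjobs": 3,
})

# the identifying fields still match the generation job
for key in evgen_carryover:
    assert simul_job[key] == evgen_carryover[key]
```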
reconstructing events (step 4 of the chain)
- everything remains the same as for simulation except:
- change mode to "recon"
- type into transform_script "csc_reco_trf.py"
- type into the inputdata the corresponding name of your simulation dataset
- check the output dataset with "dq2_ls -g datasetname"
- download the dataset with "dq2_get datasetname" and inspect it with root
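As the steps say, reconstruction changes only a few fields relative to simulation; a sketch of exactly that diff (keys mirror the GUI labels and are a checklist, not Ganga's scripting API; the simulation mode name "simul" is an assumption, and both dataset names are placeholders for your own datasets):

```python
# Start from the simulation job's distinguishing settings ...
simul_job = {
    "mode": "simul",                        # assumed mode name for simulation
    "transform_script": "csc_simul_trf.py",
    "inputdata": "YOUR_EVGEN_DATASETNAME",  # placeholder
}

# ... and change only what the reconstruction steps above list.
recon_job = dict(simul_job)
recon_job.update({
    "mode": "recon",
    "transform_script": "csc_reco_trf.py",
    "inputdata": "YOUR_SIMUL_DATASETNAME",  # placeholder: output of the simulation step
})
```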
links