PerkinElmer Informatics Support Forum - Columbus

Testing the REST API token authentication via command line


If you have command line access to the Linux back-end then the token authentication can quickly be tested for any Columbus user account using the curl tool.

Insert the unique username and password where INSERT appears in the command below.

[columbus@columbus]$ curl -i -X POST -H 'Content-Type: application/json' -d '{"userName": "INSERT", "password": "INSERT", "urls": ["/api/1.1/images/*"]}' http://columbus/api/1.1/authentication/token

This should yield an output like the following:

HTTP/1.1 201 CREATED
Server: nginx/1.0.8
Date: Tue, 02 Aug 2016 13:36:32 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Language
Content-Language: en-us

{"token": "a436ac1b-5965-4ad9-abf9-487f1b6031e0", "urls": ["/api/1.1/images/*"]}

The authentication token can then be used to test image retrieval. The example below requests the image with ID 2801 and saves it to the local /tmp directory on the Columbus server.

Note: The measurement containing the image with ID 2801 must first be published via the Columbus web interface in order for the images to be accessible to the API.

[columbus@columbus]$ curl -H 'PKI-Columbus-Authentication: a436ac1b-5965-4ad9-abf9-487f1b6031e0' http://columbus/api/1.1/images/2801.tiff -o /tmp/2801.tiff
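Both steps can also be combined in a short shell sketch that captures the token from the JSON response; the sed extraction assumes the exact response format shown above rather than doing full JSON parsing:

#!/bin/bash
# Request a token, extract it from the JSON response, then fetch an image.
RESPONSE=$(curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"userName": "INSERT", "password": "INSERT", "urls": ["/api/1.1/images/*"]}' \
  http://columbus/api/1.1/authentication/token)
TOKEN=$(echo "$RESPONSE" | sed -n 's/.*"token": "\([^"]*\)".*/\1/p')
curl -H "PKI-Columbus-Authentication: $TOKEN" \
  http://columbus/api/1.1/images/2801.tiff -o /tmp/2801.tiff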

If the authentication fails, check the /var/log/columbus/web/columbus.log for details, or contact the Informatics Support team using informatics.support@perkinelmer.com for further assistance.


Redirecting columbus and acapella temp data


The changes suggested below will only become active once the Columbus service is restarted.

[columbus@columbus]$ sudo /etc/init.d/columbus restart



=========================================
acapella user
=========================================

Acapella uses the value of the TMP environment variable; only if this is not set does it fall back to the /tmp directory. You can specify the TMP environment variable in the shell script which starts Acapella. For example, modify...

/usr/local/PerkinElmerCTG/Acapella*/bin/acapella.sh

...and add the TMP setting near the end of the file, just before the line exec $BINDIR/acapella "$@":

...
export TMP=/path/to/new/tmp/folder
exec $BINDIR/acapella "$@"
...

Please make sure the specified path exists and is writable for the user "acapella".


=========================================
columbus user
=========================================

Data bound for /tmp can be redirected by placing the following lines at the top of the /etc/init.d/columbus script...

# redirect tmp data
TMPDIR=/path/to/new/tmp/folder
TMP=$TMPDIR
TEMP=$TMPDIR
export TMPDIR TMP TEMP


Please make sure the specified path exists and is writable for the user "columbus".

Suggestion for Background Job Status page


A small, but I think useful, new feature for the Background Job Status page would be a filter button for the Status column. The use case is wanting to know (only!) which jobs are currently Running, or still Scheduled, etc. I often just do a browser Find > "Running" to see which jobs are not done yet. Especially in a multi-user environment, there can be many jobs Completed or in various stages of completion.

An additional useful filter would be by User Name.

Perhaps even better would be a reporting mechanism, even simply email, that would send a notification upon a whole Session ID being complete.

Thanks,
David

Listing installed/available packages on SLES and RedHat


If you want to know which version of Columbus, or more specifically which Columbus packages, are currently installed, you can run the following command:

$ rpm -qa | grep Columbus

A typical output for the Columbus 2.7.1 installation would be as shown below.


[root@BigRed ~]# rpm -qa | grep Columbus
Columbus-db-2.7.1.131446-gen.x86_64
Columbus-downloads-2.7.1.131446-gen.noarch
Columbus-webservice-2.7.1.131446-1.rhel6.x86_64
Columbus-nginx-1.0.8-2.rhel6.x86_64
Columbus-webapp-2.7.1.131446-1.rhel6.x86_64
Columbus-omero-2.7.1.131446-gen.x86_64
Columbus-2.7.1.131446-gen.x86_64
[root@BigRed ~]#
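To report just the version string of the base package, something like the following should work, assuming the base package is simply named Columbus as in the listing above:

$ rpm -q --queryformat '%{VERSION}\n' Columbus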

-----------------------

RAH


Can I restart Columbus services independently?


The Columbus application is divided into four main services, as shown below:

[root@BigRed ~]# /etc/init.d/columbus status
* Columbus status
Database: up [ OK ]
Acc: up [ OK ]
Web: down [WARNING]
Celery: up [ OK ]

- Database: effectively the data repository, split between the pgsql db and the /OMERO data/pixel repository.

- Acc ("Acapella"): the data import/export/analysis engine.

- Web: handles the Web UI and associated connections/services, e.g. Nginx.

- Celery: a task queueing service/system for image rendering/exports to 3rd party apps such as High Content Profiler.

If a single component is showing a warning then it is possible to start/stop/restart that individual service (requires root privileges).

In the example shown above the Web component is 'down' with a 'WARNING' status. Rather than restarting all services using the generic columbus script found in /etc/init.d/, which might unnecessarily kill, for example, analysis/import/export jobs, you may want to isolate the offending service and restart it independently; the same columbus script found in /etc/init.d will allow you to do that.

e.g.

$ /etc/init.d/columbus start web

[root@BigRed ~]# /etc/init.d/columbus start web
* Starting Columbus
Web: [ OK ]
[root@BigRed ~]#
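The same script should accept stop and restart for an individual component as well; the component name below follows the lowercase label used in the example above:

$ sudo /etc/init.d/columbus restart web
$ sudo /etc/init.d/columbus status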

-----------------------------

RAH

How do I list the Columbus databases running under postgres?


Switch to the postgres or columbus user:

$ su postgres

or

$ su columbus

Run the following command:

$ psql -l

Both the columbus_webapp and omero4_4 databases should appear in the list (Columbus 2.4 and above).
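To narrow the listing to just these two databases, the output can be filtered with grep:

$ psql -l | grep -E 'columbus_webapp|omero4_4'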

Can I list all processes associated with Columbus?


The command to use for this purpose would be as follows:

$ ps -fU columbus

This will list all processes associated with Columbus along with the relevant PIDs.
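If, on your installation, analysis jobs run under the separate "acapella" account mentioned elsewhere in this forum, both users can be listed in one call; the combined user list is an assumption to adjust to your setup:

$ ps -fU columbus,acapella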

Can results be auto-published in Columbus?


After generating results for a plate, the results need to be published before they are accessible to Spotfire/High Content Profiler. Is there a way to auto-publish results, or to set this as the default behavior, so the manual 'publish' step can be skipped?


Columbus Helper/IE issue


Download and install the Columbus Helper application

Download the connection file which is used to establish the connection between the Columbus Helper app already installed on the client PC and the upstream Columbus server.

If after downloading the Columbus connection file you click the 'Open' option but nothing happens try the following:

- Launch the Internet Explorer web browser

- Go to the 'Tools' menu > Internet Options > Advanced

- Click the 'Reset' button to revert the Internet Explorer settings to default

Now download the Columbus Helper connection file and attempt to 'Open' it again.

RH

Importing Image Data to Columbus using Command Line


Image data can be imported to Columbus using the following import.script:

https://perkinelmer.box.com/v/importscript

Download the above script and move it to the server using a suitable application such as WinSCP

https://winscp.net/eng/download.php

The image data to be imported must be available on a mounted file system on the server. Essentially, you can either copy the files to the server or mount a share containing them. Perhaps one of the more common shares would be a Windows share to the Linux system, in which case please see the following technical note:

https://access.redhat.com/solutions/448263

In the command line below, the path to the image files appears as Image_Data_Folder.

To check for suitable import types, refer to the "Import Type" menu, which you will find in the Import View in Columbus. In the command line below this appears as Import_Type.

The image import is executed by command line as follows:

$ acapella -s User=Columbus_User_Name -s Password=Columbus_User_Password -s Host=Columbus_IP_Address -s ImportType=Import_Type -s DatasetFolder=Image_Data_Folder -s ScreenName=Columbus_Screen_Name import.script

For instance, in the following example an ArrayScan TIF dataset in the path /home/columbus/ImportTest will be imported to the screen named Test using the columbus user.

$ acapella -s User="columbus" -s Password="columbus" -s Host="localhost" -s ImportType="ArrayScan TIF" -s DatasetFolder=/home/columbus/ImportTest -s ScreenName="Test" import.script

Columbus Helper is reported as damaged when executed on MacOS


Symptoms:
After installing the Helper Application, MacOS reports an error stating that the application is damaged when attempting to execute the Helper connection file.



Cause:
The MacOS security settings prevent applications downloaded from unknown developers from being executed.

Solution:
To avoid this message, visit 'System Preferences' -> 'Security & Privacy' -> 'General' and select “Allow applications downloaded from: Anywhere”.

Note that the “Allow applications downloaded from: Anywhere” setting is hidden by default on later versions of MacOS, but can be activated using the following workflow:

1. Open the Terminal app from the /Applications/Utilities/ folder and then enter the following command syntax:

sudo spctl --master-disable

Hit return and authenticate with an admin password

2. Launch System Preferences -> 'Security & Privacy' -> 'General'

3. The 'Anywhere' option under 'Allow apps downloaded from:' should now be available
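To restore the default Gatekeeper behaviour afterwards (which hides the 'Anywhere' option again), re-enable the assessment system:

sudo spctl --master-enable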

Columbus db fails to initialise


There are a number of reasons why the db component may not initialise, so the Blitz-0.log is the first place to look for clues.

In this post we are interested in a specific root cause - a corruption in the Lucene Index.

On the Columbus server the log is located here:

/var/log/columbus/db/Blitz-0.log

Search the log for entries relating to 'Lucene'.
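This can be done directly from the shell:

$ grep -n -i 'lucene' /var/log/columbus/db/Blitz-0.log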

If you see an error entry in the log which reads:

Caused by: org.hibernate.search.SearchException: Unable to open Lucene IndexReader
at org.hibernate.search.reader.SharingBufferReaderProvider.createReader(SharingBufferReaderProvider.java:96)
at org.hibernate.search.reader.SharingBufferReaderProvider.initialize(SharingBufferReaderProvider.java:73)
at org.hibernate.search.reader.ReaderProviderFactory.createReaderProvider(ReaderProviderFactory.java:64)
at org.hibernate.search.impl.SearchFactoryImpl.<init>(SearchFactoryImpl.java:130)
at org.hibernate.search.event.ContextHolder.getOrBuildSearchFactory(ContextHolder.java:30)
at org.hibernate.search.event.FullTextIndexEventListener.initialize(FullTextIndexEventListener.java:79)
at org.hibernate.event.EventListeners$1.processListener(EventListeners.java:198)
at org.hibernate.event.EventListeners.processListeners(EventListeners.java:181)
at org.hibernate.event.EventListeners.initializeListeners(EventListeners.java:194)
... 76 more
Caused by: java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:311)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:60)
at org.apache.lucene.index.SegmentReader.initialize(SegmentReader.java:341)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:306)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:228)
at org.apache.lucene.index.MultiSegmentReader.<init>(MultiSegmentReader.java:55)
at org.apache.lucene.index.ReadOnlyMultiSegmentReader.<init>(ReadOnlyMultiSegmentReader.java:27)
at org.apache.lucene.index.DirectoryIndexReader$1.doBody(DirectoryIndexReader.java:102)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:653)
at org.apache.lucene.index.DirectoryIndexReader.open(DirectoryIndexReader.java:115)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:316)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:237)
at org.hibernate.search.reader.SharingBufferReaderProvider.readerFactory(SharingBufferReaderProvider.java:146)
at org.hibernate.search.reader.SharingBufferReaderProvider$PerDirectoryLatestReader.<init>(SharingBufferReaderProvider.java:220)
at org.hibernate.search.reader.SharingBufferReaderProvider.createReader(SharingBufferReaderProvider.java:91)

The likelihood is that there is a corruption in the Lucene Index.

To resolve the issue it is necessary to replace the existing /OMERO/OMERO4_4/FullText directory so that a new Index can be created.

Stop the Columbus services:

$ sudo /etc/init.d/columbus stop

Rename the existing/original FullText directory to create a backup, using the 'mv' command:

$ mv /OMERO/OMERO4_4/FullText /OMERO/OMERO4_4/FullText.bak

Restart the Columbus services:

$ sudo /etc/init.d/columbus start

Check the status to make sure all services are up.

$ sudo /etc/init.d/columbus status

If the db component successfully starts, a new FullText directory should have been created in /OMERO/OMERO4_4/.
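The regenerated index directory can be confirmed with a quick listing:

$ ls -ld /OMERO/OMERO4_4/FullText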

RAH

How can I check the Columbus version via command line?

Connect to Columbus via SSH and issue the following command:

$ rpm -qa | grep -i columbus-2

Example output:

[root@BigRed ~]# rpm -qa | grep -i columbus-2
Columbus-2.8.3.144770-gen.x86_64

The Columbus version installed in this case is 2.8.3.

RAH

Command to check the rpms installed with Columbus

Connect to the Columbus server via SSH and issue the following command:

$ rpm -qa | grep -E 'Columbus|Acapella'

Example output:

[root@BigRed ~]# rpm -qa | grep -E 'Columbus|Acapella'
Acapella-server-2.8.3.144770-1.rhel6.noarch
Columbus-nginx-1.0.8-2.rhel6.x86_64
Columbus-omero-2.8.3.144770-gen.x86_64
Columbus-webservice-2.8.3.144770-1.rhel6.x86_64
Columbus-2.8.3.144770-gen.x86_64
Columbus-db-2.8.3.144770-gen.x86_64
Acapella-columbus-webapp-2.8.3.144770-1.gen.noarch
Columbus-downloads-2.8.3.144770-gen.noarch
Columbus-webapp-2.8.3.1266-1.rhel6.x86_64
Acapella-4.1.3.121679-1.rhel6.x86_64

RAH

Local Import with Columbus


A local import through Columbus accesses data through the server's Linux file system, whereas a client import uses the Helper to transfer data from the source to the server via the client.

A local import can offer a distinct advantage, especially in the case of data accessed via a network share. It involves fewer network hops to get the data to the server, so it is quicker and there is less to go awry. There is also no requirement for the client to remain connected, via the Helper or otherwise, after the local import has been initiated.

The image data to be imported must be available through the server's Linux file system. You can either copy the files to the server or, more practically, mount the network share to the server. Perhaps one of the more common shares would be a Windows share to the Linux system, in which case please see the following technical note, which you might find useful:

https://access.redhat.com/solutions/448263
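As a minimal sketch of such a mount (the server name, share name, Windows user and mount point below are all placeholders; see the note above for credential handling):

$ sudo mkdir -p /fileshare
$ sudo mount -t cifs //winserver/share /fileshare -o username=winuser,ro,uid=columbus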

The "columbus" user account on the server must have read access to this share in order to perform the local import and read the files; please bear this in mind when setting up the mount. After setup on the Linux system:

- Log into the Columbus web interface and go to the Import view

- The Helper needs to be installed and a connection file downloaded, as is typical, but this is merely to access the Import view

- Select the Import Type

- Remove any current Source Folder path starting client://

- Enter the Source Folder path

Essentially this is the path to the files on the server starting with a forward slash /

You can also use the path starting Local:// or local:// (this was a requirement in legacy versions of Columbus).

A caveat of using a local import is that it is not possible to browse the server's file system through the Columbus web interface. Whilst this sounds like a bit of a hurdle, providing you know the name of the mount point and have the mapped drive available for reference, you can easily derive the path to the files. It starts with /, then the name of the mount point, followed by the path to the files on the file share. For instance, if the mount point is called fileshare and the data is available in the directory path /data/dataset/, the full path would be /fileshare/data/dataset/.

- Choose Import Mode (default Normal)

- Select a Naming Preference

- Start the import


What packages are installed with Columbus 2.8.2?

The following command will list the packages that are installed with Columbus:

$ rpm -qa | grep -E 'Columbus|Acapella'

The output for Columbus 2.8.2 would look like:

$ rpm -qa | grep -E 'Columbus|Acapella'
Columbus-nginx-1.0.8-2.rhel6.x86_64
Acapella-server-2.8.2.143357-1.rhel6.noarch
Columbus-db-2.8.2.143357-gen.x86_64
Acapella-columbus-webapp-2.8.2.143357-1.gen.noarch
Columbus-downloads-2.8.2.143357-gen.noarch
Columbus-omero-2.8.2.143357-gen.x86_64
Columbus-webservice-2.8.2.143357-1.rhel6.x86_64
Columbus-2.8.2.143357-gen.x86_64
Acapella-4.1.3.121679-1.rhel6.x86_64
Columbus-webapp-2.8.2.1205-1.rhel6.x86_64

RAH

How do I backup the columbus_webapp db?

Columbus creates and utilizes two databases: the omero4_4 db and the columbus_webapp db. Both are backed up by a script which runs under /etc/cron.daily and are, by default, stored in /OMERO/OMERO4_4/db_backup.

To view details of how to backup the omero4_4 db, click here.

To manually backup the columbus_webapp db, first switch to the 'columbus' user account and then run pg_dump:

$ su - columbus
$ pg_dump -v -Fc -f /OMERO/OMERO4_4/db_backup/columbus_webapp-TEST.pg_dump columbus_webapp

Where:

pg_dump -v -Fc -f - the arguments used as part of the pg_dump process to create the backup
/OMERO/OMERO4_4/db_backup/ - the location where the backup will be stored
columbus_webapp-TEST.pg_dump - the name of the backup file
columbus_webapp - the name of the db being backed up

RAH
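For reference, a custom-format (-Fc) dump of this kind would typically be restored with pg_restore along these lines (a sketch, assuming an existing, empty columbus_webapp database to restore into):

$ pg_restore -v -d columbus_webapp /OMERO/OMERO4_4/db_backup/columbus_webapp-TEST.pg_dump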

How do I delete the Columbus software packages and user data?


The workflow below details the process of removing the main components of a Columbus installation. Note that this will not revert the system to a vanilla installation. Those package dependencies which were provided by the operating system will remain, as will the user accounts which were generated by the Columbus installation scripts.

WARNING: This will erase ALL user data. After these steps have been performed it will only be possible to recover the data if you have an appropriate backup available.

Removing the Columbus packages

1) Connect to the Columbus server via PuTTY/Terminal

2) Stop the Columbus service

$ sudo /etc/init.d/columbus stop

3) Delete the Columbus file repository

$ sudo rm -rf /OMERO/OMERO4_4

4) Access the postgres user account

$ sudo su - postgres

5) Delete the omero and webapp databases

$ dropdb omero4_4

$ dropdb columbus_webapp

6) Exit the postgres user account

$ exit

7) List all installed Columbus and Acapella packages

$ rpm -qa | grep -E 'Acapella|Columbus'

8) Delete the Columbus/Acapella packages listed in the output of the command in step 7), passing their names to rpm, e.g.

$ sudo rpm -e --nodeps Package_Name
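Alternatively, steps 7) and 8) can be combined into a single pipeline; review the package list from step 7) first, since this removes everything the pattern matches:

$ rpm -qa | grep -E 'Acapella|Columbus' | xargs sudo rpm -e --nodeps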

9) Check for any remaining packages:

$ sudo rpm -qa | grep -E 'Acapella|Columbus'

The output should be empty.

10) Remove the Columbus software repository from the /etc/yum.repos.d directory (RedHat Enterprise Linux), or the /etc/zypp/repos.d directory (SuSE Linux Enterprise Server).

The PDF copy of this technote is available for download, here:

https://perkinelmer.box.com/s/fe4yve96zaw5gdltzjsbzk27ks8tr13y

What are the columbus_webapp db dump files used for?

These are backups of the second psql database instance, columbus_webapp. It stores all data that Columbus needs in addition to what Omero holds. For example, the tables stored under columbus_webapp include login and authentication attempts for users connecting to Columbus from a 3rd party app via the webapp, and celery queue information; it also contains things like the publishing status, cluster job status and the remote references for measurements that have been forwarded to Amazon S3 during import for the cluster functionality.

Its use/relevance depends on whether or not you are using features like publishing or cluster computing.

RAH

What is the Celery service used for?

Celery is a task queueing service. Its primary use is managing image rendering and export jobs submitted to the Columbus server via 3rd party applications, e.g. Spotfire. The Celery system picks these jobs from the queue, runs them asynchronously and, when complete, prepares a response which is then picked up by the webapp to return to a client or webpage.

Celery runs as another service component; it is started and stopped via /etc/init.d/columbus, which actually uses /etc/init.d/columbus-celeryd. If the celery service doesn't respond to the standard /etc/init.d/columbus script then you can call the columbus-celeryd script directly using:

$ /etc/init.d/columbus-celeryd stop/start/restart/status

The Celery service starts multiple worker nodes for accepting image rendering requests; the workers log information regarding those jobs to /var/log/columbus/web/columbus-images-service.log.

RAH
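The worker log can be followed live while troubleshooting rendering or export jobs:

$ tail -f /var/log/columbus/web/columbus-images-service.log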