Coding and Using Pulse Programs

Assumed knowledge: Bruker and/or Varian Pulse Programming and setting up of experiments


In this tutorial we will show you how to run non-uniform sampling sequences using Bruker Topspin 2.1, Topspin 3.0-3.2 and Agilent VNMRJ. The methods presented here work well with istHMS reconstruction of data. Other methods of acquiring non-uniform data may not be compatible with istHMS, so we strongly recommend you follow the tutorials here. For example, the native Bruker method of non-uniform acquisition in Topspin 3.0+ is designed to work with MDD reconstruction and does not appear to work well with istHMS. Please follow the methods below. We also strongly recommend using schedules produced by our scheduler programs.


Quick Links to:

Topspin 2.1 Acquisition

Topspin 3.0+ Acquisition

VNMRJ Acquisition


Topspin 2.1 Acquisition

Topspin 2.1 has no native method of acquiring non-uniform data; however, the normal acquisition method of a pulse sequence can be co-opted to perform NUS. Altering your own sequences requires an understanding of Bruker pulse programming, but we can provide some standard experiments that require no programming skills to get running. This tutorial will first explain how to change pulse sequences (skip this if it is more detail than you need) and then explain how to run them (similar to a regular acquisition with a few straightforward changes).

First, we will talk about how to deal with schedules so TS2.1 sequences can read them. Our example will be for an HNCO, so there will be 2 indirect, non-uniformly sampled dimensions. Schedule files will have two columns in them and will look something like this:

0     0
1     0
2     0
3     0
5     0
.     .
.     .
23    12
29    12
33    12
.     .
.     .
63    127 


In Topspin 2.1 we need the columns of numbers for the two dimensions to be in separate files. In the end, these two lists will be read into the 'vclist' and 'vplist' variables that are part of the Bruker acquisition environment. In this case the vclist will refer to the points in the CO dimension and the vplist will refer to the points in the N dimension. This is a standard protocol we try to maintain: when the standard Bruker sequence has a slow acquisition dimension (F1), this will always be matched with the vclist, while the fast dimension (F2) will be matched with the vplist.

Within the actual pulse program, these lists of numbers will be referred to as t1list (CO dimension, slow dimension, F1) and t2list (N dimension, fast dimension, F2). These lists will be text files that you will have created using our schedule generators. We recommend names like "t1_HNCO_818_128" - here 818 refers to how many points are in the list and 128 refers to the maximum number of points in that dimension. The t1 refers to the fact that this list is for the slow (F1, CO) acquisition dimension. The second file, for the N dimension, will thus be called "t2_HNCO_818_64". Note that 64 * 128 = 8192, so 818 samples is about 10% of the entire matrix. We will explain how to get the two separate files from the two-column file shortly.
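If you want to double check that a schedule matches the numbers in these file names, a few lines of Python will do it. This is just an illustration; `schedule_stats` is not part of any Bruker or istHMS software, and it assumes the column order of the example above (N indices first, CO indices second):

```python
def schedule_stats(path):
    """Read a two-column schedule and report (samples, N size, CO size)."""
    n_col, co_col = [], []
    with open(path) as f:
        for line in f:
            if line.strip():
                n_idx, co_idx = line.split()
                n_col.append(int(n_idx))
                co_col.append(int(co_idx))
    # +1 because indices start at 0, so a maximum of 127 means 128 points
    return len(n_col), max(n_col) + 1, max(co_col) + 1
```

For the HNCO example this would report (818, 64, 128), i.e. 818 / (64 * 128) = 818 / 8192, about 10% sampling.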

Jump to "Running Pulse Sequences" to skip over how to program these sequences yourself or continue with:

Changing Pulse Sequences

Here we will detail how regular Bruker sequences can be modified to acquire NUS data. The changes here refer to the regular hncogp3d program. Similar changes can be made to other sequences, but I recommend you know what you are doing before attempting that. You can download the regular hncogp3d program here and our NUS version for Topspin 2.1 here to compare.


Between the lines:

#include <Avance.incl>
#include <Grad.incl>
#include <Delay.incl>






define list<loopcounter> t1list=<$VCLIST>
define list<loopcounter> t2list=<$VPLIST>


This sets up two variables within the pulse program, a t1 list called t1list (CO dimension) and a t2 list called t2list (N dimension). These variables just hold the next number in the list from the schedule files mentioned above (t1_HNCO_818_128 and t2_HNCO_818_64). Shortly, we will associate these files with the VCLIST and VPLIST under the 'eda' window in Topspin, but I will go into more detail on that later.

Next, after the calculation of the DELTA values in hncogp3d, we will calculate our own values used in determining the evolution time for each point. I won't detail the nature of these calculations - you either understand this or you don't - just know they are used for constant-time calculations in F2. We will add the following:



Next, before the 'aqseq 321' command, enter the following line:



This sets up the variable l10, which will count the number of times we need to loop through the pulse program. This number should equal the number of samples being acquired - for example, 818. td1 and td2 refer to the numbers under the AcquPars window in the TD row for the F2 and F1 columns. In these fields you will put '4' and '818' respectively (if your list is 818 lines long). In the end, this just means that l10 will equal 818, so the pulse program will loop through the sequence the same number of times as there are points in the schedule files.

Next I will show you the standard and the NUS programs side by side from the initial zeroing of data to the first 90 degree proton pulse:

Standard Program:                            NUS Program:

1 d11 ze                                     1 d11 ze
  d11 pl16:f3                                  d11 pl16:f3
2 d11 do:f3                                  2 d11 do:f3
3 d1 pl1:f1                                    6m
                                             3 3m
                                             4 9m
                                             5 d1 pl1:f1


What's going on here? Normally, after the acquisition of a single scan, the pulse program loops back to point '2', we turn off f3 decoupling (don't want to fry the probe!) and we start again with a high power 90 degree pulse on 1H. The Bruker programming language will look after the details of collecting hypercomplex points using macros at the bottom of the program that look like this:

Standard Program:

go=2 ph31 cpd3:f3
  d11 do:f3 mc #0 to 2
     F1PH(rd10 & rd29 & rd30 & ip4, id0)
     F2EA(igrad EA & ip6*2, id10 & id29 & dd30)


This will acquire complex data in F1 and Echo-Antiecho data in F2 for you automatically, looping back to point 2 NS times, and also looping back to point 2 to collect the hypercomplex data (in this case 4 times for each point, for a total number of scans of NS * 4). Because we are doing this non-uniformly, we have to take care of the hypercomplex data acquisition ourselves. I will show you how to do this soon. For now, we need the return-to points at '3' and '4' above in the NUS program because we need points to explicitly return to. It may not always be necessary to have these extra loop-to points, but they don't hurt and they sometimes help with clarity. Delays are added here because some consoles need time to catch up.
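The overall loop structure we will end up building by hand can be sketched in pseudocode. The Python below is purely an illustration of the nesting; `acquire_fid` is a hypothetical stand-in for one go=2 block of NS summed scans:

```python
def acquire_nus(schedule, ns, acquire_fid):
    """Sketch of the manual NUS acquisition loops for one 3D experiment.
    schedule is a list of (t1, t2) index pairs; acquire_fid stands in for
    one go=2 block that sums ns scans into a single FID."""
    fids = []
    for t1_pt, t2_pt in schedule:                    # 'lo to 5 times l10'
        for f1_comp in ("real", "imag"):             # 'lo to 4 times 2' (ip4)
            for f2_comp in ("echo", "antiecho"):     # 'lo to 3 times 2' (igrad EA & ip6*2)
                fids.append(acquire_fid(t1_pt, t2_pt, f1_comp, f2_comp, ns))
    return fids  # 4 FIDs per schedule point, i.e. NS * 4 scans per point
```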

Now, after we set high power for f1 but before we do a pulse, we will calculate new evolution delay times. Note, this sequence uses constant time in the F2 (nitrogen) dimension, so the calculation for the delay in F2 is a little tricky. But first, let's calculate the F1 delay, based on the numbers from the t1list.

We do so using the following:



Here, d0 (evolution time in F1) is being set to in0 (dwell time based on SW and relevant frequency information) times the number in the t1list. We add an additional 3u here because if the first number in t1list = 0 (which it is the first time through), d0 would equal 0 us, which is not permitted. So, in the case that we have gone through the t1list 4 times, and the fourth number in this list is 9, then d0 will equal in0 * 9 + 3u.
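Written out in plain arithmetic, that rule looks like this (a Python illustration; `d0_for` is my own naming, and the 100 us value for in0 is just an example, not taken from any real parameter set):

```python
def d0_for(t1_entry, in0):
    """Evolution delay for one t1list entry: d0 = in0 * entry + 3 us."""
    return in0 * t1_entry + 3e-6  # the 3 us floor keeps d0 from ever being zero

# With a hypothetical in0 of 100 us, an entry of 9 gives 903 us,
# and an entry of 0 (the first pass) gives just the 3 us floor.
```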

Next, we calculate the delay times during constant time evolution:



Again I won't explain this. But note that here we are using the t2list number and multiplying it by the in10 value (dwell time in F2). Now we have calculated values for d0 (regular evolution in F1), d10, d29 and d30 (constant time evolution in F2). This is normally taken care of automatically in a Bruker sequence but we have to do it manually.

The rest of the pulse sequence is the same, except now we have to alter the acquisition method at the bottom. We no longer use the convenient Bruker macros to do this. Instead we will change the bottom part to look like this:

  go=2 ph31 cpd3:f3
;  d11 do:f3 mc #0 to 2
;     F1PH(rd10 & rd29 & rd30 & ip4, id0)
;     F2EA(igrad EA & ip6*2, id10 & id29 & dd30)

  d11 do:f3 wr #0 if #0 zd

  3m igrad EA
  3m ip6*2
  lo to 3 times 2

  3m ip4
  lo to 4 times 2

  3m rp4
  lo to 5 times l10


See how we have commented out the normal acquisition macros F1PH, F2EA etc but we retain the

go=2 ph31 cpd3:f3

line. We move past this line only when we have acquired as many scans as NS says we should. By now we have acquired only one of the 4 FIDs that make up the hypercomplex point for each point in the schedule. We need to set up the phasing for the other parts of the hypercomplex point. But first, we must turn off decoupling, so we issue the following line:

  d11 do:f3 wr #0 if #0 zd


Don't forget to do this. It's a safety thing. It also zeros the data for the next part of the hypercomplex point. Now, if you are acquiring TROSY data you don't want to decouple during acquisition, so maybe you don't need to turn decoupling off after acquisition. But it's not too bad to keep the whole line in anyway. Up to you.

Now the next few lines will set the phasing parameters for the F2 dimension. They are:

  3m igrad EA
  3m ip6*2
  lo to 3 times 2


Here we perform igrad EA and increment phase 6 twice. You can see that this is what we need to do from the regular sequence line: F2EA(igrad EA & ip6*2, id10 & id29 & dd30). We then loop back to '3' and acquire another NS FIDs (this time we are collecting the anti-echo component). After this we collect the imaginary part of the F1 dimension. We set this up by using:

  3m ip4
  lo to 4 times 2


Again, we know to increment phase 4 only once because of the line F1PH(rd10 & rd29 & rd30 & ip4, id0) in the regular sequence. We loop by going back to '4'. After looping through 4 times like this (4 data points for the one hypercomplex number) we are done for that point in the schedule. We follow this up by moving the t1list and t2list to the next number in their lists with the commands:



We reset phase 4 with

3m rp4


and then loop to '5' and start collecting for the next point. We do this loop to 5, l10 times.

  lo to 5 times l10


That is, we loop to '5' for each point in the schedule, and for each of these loops we loop 4 times to acquire the hypercomplex point. Nothing else needs to be altered. All other phases etc. should work. All we have done is take the numbers in the schedule file to calculate appropriate delay times for frequency evolution and then force the sequence to acquire our hypercomplex data with appropriate phasing for the real/imaginary and/or echo/antiecho components.

This is a basic explanation of coding NUS sequences under Topspin 2.1. There may be other things to consider and we can't provide a comprehensive guide that will cover everything, but we hope this will be clear enough to guide experienced programmers to do their own coding if they feel confident enough to do it.

Next, let's talk about actually...


Running Pulse Sequences


Once you have access to a Topspin 2.1 NUS pulse sequence, you can run it using the following setup. First, set up everything as though you were running the standard sequence. So in the case of running the hncogp3d experiment, rpar the HNCOGP3D experiment or use parameter set HNCOGP3D. This will automatically load the hncogp3d experiment into the PULPROG field under AcquPars. However, we will use our NUS program. Let's say we use this one. It has the name SAR_hncogp3d_TS21 (it is exactly the one we wrote above under "Changing Pulse Sequences"). We simply need to put this program into the conventional place under the Topspin directory and then load it. This is usually in the directory



or you can put it into



Now use the buttons to load it into the PULPROG field.

Since these programs are modified from the standard Bruker sequences only in the way the data is collected and not in the fundamental way pulses are delivered down the probe, these sequences can be set up with 'getprosol' just like you would with the standard sequence. Of course you need correct pulses to be added to your prosol table first.

The next field that deviates from the standard method is the TD row. Here we will put in different values for F2 and F1. The justification for these numbers is described above; otherwise just follow us here. First you will put in the number '4' for F2. Then you will put in the number of samples in your schedule for F1. Simply put, the reason you put in 4 here is because for 3D data you need to acquire 4 FIDs to define the hypercomplex point in the 2 indirect dimensions. Almost everything else you have in the AcquPars window will be the same as the normal setup. You put in your sweep widths, receiver gain etc. just like a normal acquisition.


Now comes the tricky part. Scroll down to where it says "Lists" in the AcquPars window. Under this you will see fields for changing the VCLIST and VPLIST. These fields refer to files which you need to put into the vc directory and the vp directory. You will have two files named like "t1_HNCO_818_128" and "t2_HNCO_818_64". The first thing you need to do is put them into the right directories so Topspin can read them. The directories they need to go into are:





The vclist is used for t1 here and the vplist for t2. So it's tempting to only put t1_HNCO_818_128 under the vc directory and t2_HNCO_818_64 under the vp directory. You should do this, but also put each file in the other directory as well. This is a Bruker bug. Don't ask... just follow...
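If you script your setup, the copy step might look like this. This is only a sketch; `install_lists` is a hypothetical helper, and the directory arguments are whatever your Topspin vc and vp list directories are (shown above):

```python
import os
import shutil

def install_lists(vc_dir, vp_dir, t1_file, t2_file):
    """Copy BOTH schedule files into BOTH list directories.
    Only one copy of each is strictly used, but Topspin 2.1 wants each
    file present in both places (the bug mentioned above)."""
    for fname in (t1_file, t2_file):
        for dest in (vc_dir, vp_dir):
            shutil.copy(fname, os.path.join(dest, os.path.basename(fname)))
```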

Now that the files are in place you can load them under AcquPars window. Make sure VCLIST is the t1 list and VPLIST is the t2 list.

Side Note: How to make a 2D list with 2 columns into two files of one column?

We will use a little unix magic here. Let's say you have a file (called sched.2d) with two columns of numbers in it. You would get this two-column list from one of our schedule generators. The first column refers to the t2 points (N dimension) and the second column refers to the t1 points (CO dimension). We can extract the first column with:

cat sched.2d | awk '{print $1}' > t2_HNCO_818_64


This creates the file "t2_HNCO_818_64" with only the first column in it... the t2 list (for N). Follow this up with

cat sched.2d | awk '{print $2}' > t1_HNCO_818_128


This will create the t1 list (for CO). These files should be moved to the appropriate place as described above.
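If you would rather avoid awk, the same split can be done in Python (a sketch using the example file names from above; `split_schedule` is my own helper, not part of any Bruker tool):

```python
def split_schedule(sched_path, t2_path, t1_path):
    """Write column 1 (t2, N dimension) and column 2 (t1, CO dimension)
    of a two-column schedule into separate one-column files."""
    with open(sched_path) as f:
        rows = [line.split() for line in f if line.strip()]
    with open(t2_path, "w") as t2f, open(t1_path, "w") as t1f:
        for n_idx, co_idx in rows:
            t2f.write(n_idx + "\n")
            t1f.write(co_idx + "\n")

# e.g. split_schedule("sched.2d", "t2_HNCO_818_64", "t1_HNCO_818_128")
```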

With the VCLIST and VPLIST file names in place you are just about done. The rest is just a 'sanity check'. What I have described here is a 10% schedule. This means this sequence should take 1/10 the time to run. If you were to set up the same experiment conventionally with 64 complex points by 128 complex points and measure how long that experiment takes to run with the command "expt" then going back to the NUS setup and doing 'expt' should give you 1/10 the time. This is probably worth doing the first few times until you are comfortable with doing this. You should also get no other errors or problems. One problem may be having points in the N dimension that exceed the constant time period resulting in negative numbers for some delays. If you are stuck, drop me a line (Scott) and I will see what I can do.
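The arithmetic behind that sanity check is simple (Python, using the numbers from this example):

```python
# Expected NUS measurement time as a fraction of the conventional experiment.
n_samples = 818          # lines in the schedule (the TD value for F1)
full_grid = 64 * 128     # complex points N x CO in the conventional setup
fraction = n_samples / full_grid
print(f"sampling fraction: {fraction:.1%}")  # about 10%, so 'expt' should
                                             # report roughly 1/10 the time
```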

Of course, to run the experiment, just type 'zg' and enjoy as the experiment acquires.

Last note: Before processing this spectrum you will need the schedule. When acquiring data like this the data directory contains only the vplist and vclist, but not the schedule with both columns in place. You can of course just copy your schedule file to the data directory and process there. But you can also recreate the 2-column schedule file from the vplist and vclist. At the unix command line, simply type:

paste vplist vclist > schedfile


This will create the schedule file for you with the file name "schedfile".


Topspin 3.0+ Acquisition

Topspin 3.0 and 3.2 have native non-uniform sampling pulse sequences and a mechanism for executing them which involves creating random schedules. This means we don't need to rewrite pulse programs or use vclists and vplists to utilize schedules. In fact you can simply use Topspin 3.0+, and the schedules that Topspin makes can be reconstructed with our software. However, we believe our Poisson Gap Sampling schedules are superior.

In order to get Poisson Gap schedules working with Topspin 3.0+, we have implemented a macro in conjunction with a small program that needs to be installed in your Topspin 3.0+ directory. You can simply request this software from Scott Robson by emailing:

I'll send you a tar file with details and the software. Just untar the file and read the README file for installation instructions. Then run the macro.

The macro will ask some very simple questions. The important things to know are:

1) Setup the experiment like you would normally for a linear data set.

2) Execute the macro with the command 'nusPGSv3' - version 3 is the current version. The macro will ask a few questions.

3) Random Seed: This is just a number that initializes a pseudo-random number generator. You can use '0' to get a random seed based on the current time.

4) Sine Portion: This sets up how the random Poisson gaps are spread out. Details are in the relevant papers. The default is '2' and you should probably just use '2' as well.

5) Number of points to collect: The default is 10% of the linear points you have already selected in the setup. This is good (and probably conservative) for most 3D spectra. So say you have asked for 64 points in one indirect dimension and 64 points in another indirect dimension. This corresponds to 32 complex points in each indirect dimension. That's a total of 1024 hypercomplex points. 10% would be about 103 points. These 103 points will be selected by the macro based on a Poisson Gap distribution and weighted by the Sine Portion above.

6) Tolerance: This allows you to adjust the range. So above, the program will attempt to find 103 points, but maybe it finds 104 points. A tolerance of 1% would accept this value. A tolerance of up to 10% would accept values up to about 113 points.

7) Shuffle the Schedule: The scheduler can randomize the order of the points for you. This may be useful if, for example, the pulse sequence has to be stopped early. When a random order is used, you will have points collected from all over the schedule, not just the beginning. This means you will have data with long evolution times in both dimensions already collected, enabling good reconstruction in both dimensions. Note that the first point (0,0 for example) is always collected first. The choice is yours, but not shuffling the schedule the first few times might help in understanding what is happening. So I suggest you don't shuffle when learning the system. Shuffle once you are comfortable.

8) Now you are done. Just type zg and let it run.

9) You reconstruct the data using hmsIST like you would for any other spectrum.
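To make the seed, sine portion, number of points, tolerance and shuffle options above a bit more concrete, here is a much simplified 1D sketch of sine-weighted Poisson gap scheduling in Python. This is only my illustration of the idea, not the code inside the macro; the function name, the convergence loop and the point-based (rather than percentage) tolerance are all my own simplifications:

```python
import math
import random

def poisson_gap(total, target, sine_portion=2.0, tolerance=0, seed=0, shuffle=False):
    """Toy 1D sine-weighted Poisson-gap scheduler (illustrative only)."""
    rng = random.Random(seed or None)      # seed 0 -> seed from current time

    def poisson(lam):                      # Knuth's method, fine for small lam
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    adj = total / target                   # initial guess at the mean gap scale
    for _ in range(1000):
        points, i = [], 0
        while i < total:
            points.append(i)
            i += 1
            # sine weighting: gaps grow toward late evolution times
            i += poisson(adj * math.sin((i + 0.5) / total * math.pi / sine_portion))
        if abs(len(points) - target) <= tolerance:
            if shuffle:                    # randomize order, keep first point first
                rest = points[1:]
                rng.shuffle(rest)
                points = points[:1] + rest
            return points
        adj *= len(points) / target        # too many points -> larger gaps
    raise RuntimeError("did not converge; loosen the tolerance")
```

Sampling is dense at short evolution times and sparse at long ones, which is the behavior the Sine Portion parameter controls in the real macro.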