
### The uncertainty of the half-life - IOPscience

This can be solved by means of an alternative time-analysing circuit in which a parent decay starts the timer and not a single but multiple delayed coincidences are added to the time spectrum: one time value for every daughter decay recorded over a long period of time [40].


Using multiple delayed coincidences eliminates the spectral distortion effect, but at the price of an increased number of random coincidences.

Time-interval distribution analysis method. Time-interval distribution analysis is a well-established method for measuring the activity of radiation sources [41-43]. For a simple decay observed at a count rate ρ, the time interval between two successive disintegrations follows an exponential probability density, ρ·e^(−ρt). For a parent-daughter sequence with a large number of parent atoms N1, the interval distribution contains an additional term: at low time intervals, the shape of the curve depends not on the activity of the daughter nuclide, but on its half-life.
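The interval law for a simple decay can be illustrated numerically. This is a hedged sketch, not code from the paper; the count rate and sample size are hypothetical. For a Poisson process with rate rho, the intervals between successive events have density rho·e^(−rho·t), mean 1/rho and cumulative distribution 1 − e^(−rho·t):

```python
import math
import random

random.seed(42)
rho = 5.0                          # hypothetical count rate (s^-1)
n = 200_000
intervals = [random.expovariate(rho) for _ in range(n)]

mean_interval = sum(intervals) / n                 # expect ~ 1 / rho
t0 = 0.1
frac_below = sum(1 for t in intervals if t < t0) / n
expected_frac = 1.0 - math.exp(-rho * t0)          # exponential CDF at t0

print(mean_interval, frac_below, expected_frac)
```

With this many samples the empirical mean interval and the fraction of short intervals agree with the exponential law to well under a percent.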

The time-interval spectrum is obtained by binning the time differences between all events. It is possible to focus on one of the transitions separately by selecting regions in the energy spectrum mainly belonging to the parent-daughter sequence of interest.

Time-interval distribution analysis yields results comparable to the delayed coincidence method [38, 43]. The half-life of the nuclide is usually derived from a least-squares fit of an exponential function to the measured time spectra. However, some complications have to be taken into account. Least-squares fitting procedures imply the assignment of proper weighting factors to the stochastically distributed data involved. In the case of a Poisson distribution, the obvious choice of setting the weighting factor equal to the inverse of the measured value is prone to bias towards low values.

Alternatively, using the inverse of the fitted value may turn out to be biased towards higher values, depending on the procedure followed. Additional problems arise when the possibility of zero counts also has to be taken into account.

An overview of the problems and possible solutions can be found in [44, 45] and references therein. In fact, it is possible to perform unbiased least-squares fitting with Pearson's chi-square, i.e. with the weighting factor based on the fitted rather than the measured value. After each optimisation, the weighting factor is set equal to the fit value Y obtained in the last iteration and a new fit is performed until convergence is reached [45].
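The iterated-reweighting idea can be sketched as follows. This is an illustration under stated assumptions: noiseless synthetic counts keep the result deterministic, and a simple ternary search (assuming a unimodal chi-square) stands in for a real optimiser; the bin layout and amplitudes are hypothetical.

```python
import math

half_life = 10.0
lam_true = math.log(2) / half_life
times = [0.5 * i for i in range(1, 81)]                      # bin centres (s)
counts = [1000.0 * math.exp(-lam_true * t) for t in times]   # noiseless data

def chi2(lam, weights):
    # For a fixed decay constant, the weighted LS amplitude is closed-form.
    shape = [math.exp(-lam * t) for t in times]
    a = (sum(c * s / w for c, s, w in zip(counts, shape, weights)) /
         sum(s * s / w for s, w in zip(shape, weights)))
    return sum((c - a * s) ** 2 / w
               for c, s, w in zip(counts, shape, weights)), a

def fit(weights):
    # Ternary search for the minimising decay constant (assumes unimodality).
    lo, hi = 1e-3, 0.3
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if chi2(m1, weights)[0] < chi2(m2, weights)[0]:
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

weights = counts[:]                      # naive choice: measured values
for _ in range(5):                       # Pearson-style iteration: refit,
    lam = fit(weights)                   # then reweight with the fitted
    _, a = chi2(lam, weights)            # values until convergence
    weights = [a * math.exp(-lam * t) for t in times]

print(math.log(2) / lam)                 # recovered half-life
```

With real, noisy Poisson counts the same loop is what removes the low-value bias of weighting by the measured counts.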

There exist alternative estimators that allow using existing least-squares software in a less biased way, with minimal adaptation of the procedure. For example, one can minimise a modified chi-square expression [45] freely, without the need for an iterative procedure, and obtain an unbiased result if the data are not too close to zero.

For decay curves with a quasi purely exponential shape, another excellent method to derive an unbiased value for the decay constant is 'moment analysis'. The 'first moment', i.e. the mean time value of the distribution, is directly related to the decay constant. If the moment analysis of the decay curve is restricted to a time interval (t1, t2), the following relationship can be derived between the half-life and the first moment [45]: ⟨t⟩ = 1/λ + (t1·e^(−λt1) − t2·e^(−λt2)) / (e^(−λt1) − e^(−λt2)), with T1/2 = ln(2)/λ. The first moment, which is not a central moment, is sensitive to the time value assigned to each time bin.

Central moments are insensitive to a linear translation of the time origin. The time spectrum is the convolution of the prompt peak (the width of which reflects the variation in the timing between truly coincident events) and the slope generated by the lifetime of the nuclear level.

For relatively long half-lives, the width of the prompt peak is negligibly small and the half-life is extracted from a fit to the exponential slope. If the half-life is short compared to the time resolution of the detection system, the time spectrum resembles the prompt peak with a displaced centroid.

The lifetime can be extracted from this shift [37]. The 'shift' method is less precise than the 'slope' method. A typical uncertainty component is the time resolution of the system, represented by the FWHM of the prompt time spectrum, which is generally a Gaussian distribution obtained by measuring simultaneous events in the two timing branches.
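The centroid displacement can be illustrated numerically: convolving a Gaussian prompt response with an exponential of mean life tau shifts the centroid by tau, up to a discretisation error of about half a bin. The grid and parameters below are hypothetical:

```python
import math

dt, tau, sigma, mu = 0.05, 3.0, 1.0, 5.0        # bin width, mean life, prompt
t = [k * dt for k in range(800)]                 # time grid 0 .. 40
prompt = [math.exp(-0.5 * ((x - mu) / sigma) ** 2) for x in t]
decay = [math.exp(-x / tau) for x in t]

# Discrete linear convolution of the prompt response with the decay.
conv = [0.0] * (2 * len(t) - 1)
for i, p in enumerate(prompt):
    for j, d in enumerate(decay):
        conv[i + j] += p * d

def centroid(y):
    return dt * sum(k * v for k, v in enumerate(y)) / sum(y)

shift = centroid(conv) - centroid(prompt)        # close to tau
print(shift)
```

The centroid of a convolution is the sum of the two centroids, so the measured shift recovers the mean life, and hence the half-life T1/2 = tau·ln 2, directly.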

Contributors to the FWHM are variances due to the scintillator, the photomultiplier and the time pickoff. Photomultipliers are matched to the scintillator for maximum spectral response at the required wavelength, large quantum efficiency, short rise time, and small transit time and transit-time spread.

The development of ultrafast scintillators and photomultipliers has contributed most to improving the sensitivity of the fast-timing technique down to the picosecond region.

### The SimJava Tutorial

Timing errors such as amplitude walk are considerably reduced by using constant fraction discrimination, which produces a bipolar pulse whose zero crossing is nearly independent of pulse height. This results in more precise time pickoff. Parent-daughter transitions should be correctly separated from random coincidences and interfering signals, while influences from the latter need to be accounted for in the uncertainty.
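A small numerical sketch of this behaviour (the pulse shape, fraction `f` and delay `d` below are hypothetical, not taken from any particular discriminator): the bipolar signal f·p(t) − p(t − d) crosses zero at the same time for any pulse amplitude.

```python
import math

def pulse(t, tau=5.0):
    # Hypothetical detector pulse shape, normalised to unit peak height.
    return (t / tau) * math.exp(1.0 - t / tau) if t > 0.0 else 0.0

def zero_crossing(amplitude, f=0.3, d=2.0, dt=0.001):
    # Scan the bipolar CFD signal and interpolate its first zero crossing.
    prev_t = prev_s = None
    t = 0.0
    while t < 50.0:
        s = f * amplitude * pulse(t) - amplitude * pulse(t - d)
        if prev_s is not None and prev_s > 0.0 and s <= 0.0:
            return prev_t + dt * prev_s / (prev_s - s)
        if s > 0.0:
            prev_t, prev_s = t, s
        t += dt
    return None

cross_small = zero_crossing(1.0)
cross_large = zero_crossing(50.0)
print(cross_small, cross_large)
```

Because the whole bipolar signal scales linearly with amplitude, the interpolated crossing time is identical for both pulses, which is exactly what removes amplitude walk.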

In spectra with low counting statistics and long slopes, the least-squares analysis underestimates the lifetime, and proper fitting should be based on Poisson statistics [37].

### Measurement of intermediate half-lives

Decay curve. Many radionuclides with practical implications and applications have half-lives varying between seconds and years. This is a range of half-lives that can be measured directly by repeated activity measurements of a source.

The simplicity of the measurement principle has led many authors to publish half-life data obtained in this way, unsuspecting of hidden processes that inflate the measurement errors far beyond their uncertainty estimates. A common scenario is that an exponential decay curve is fitted to various activity values measured as a function of time, and that the uncertainty on the decay constant is obtained from the least-squares minimisation algorithm.

This procedure is often faulty [48], as real measurement data may deviate in a subtle but systematic way from the theoretically assumed decay curve. The fitted decay constant is the value leading to the smallest residuals, which explains why an erroneous result seldom raises suspicion in the mind of the experimenter [14, 15]. Other methods can also be applied to extract the half-life from the data, such as moment analysis (section 3).

The temporal dependence of the measured activity is often modelled as an exponential function superposed on a constant background: A(t) = A0·e^(−λt) + B. In reality, there is a less than perfectly linear relationship between the activity and the measured signal (count rate or current in a detector).

Various processes make measurement data deviate from the ideal decay curve, long-term instabilities being at the same time the most influential but least visible sources of error. Owing to differences in uncertainty propagation, it is convenient to subdivide these processes according to the frequency at which they occur [14, 15]. A typical example of a high-frequency process is counting statistics: Poisson processes are characterised by an exponential distribution of the interval times between successive events.

By extending the measurement, one improves the statistical uncertainty on the actual count rate. Ultra-high-frequency instabilities are also partly cancelled by the 'integrating' effect of the duration of the measurement. Normal statistical treatments, including least-squares fitting, apply because the randomness of the data is preserved. Low-frequency processes, on the other hand, include the so-called 'seasonal effects'.

Running an infinite simulation may seem pointless, but it can be of some use if animation is used; in this case, the simulation's animation could serve as an elaborate demo of a system. A further parameter specifies the point from which the termination condition should start to apply: if false, the termination condition will start to apply only after steady state has been reached.

If true, it will start to apply from the beginning of the simulation. We will now provide a termination condition for our example simulation. As you may recall, we used a for loop in the source entity in order to terminate once a given number of events had been scheduled. As previously mentioned, this method is used by entities to find out whether or not the termination condition has been reached.

After removing the for loop we will add a termination condition based on event completions at the processor. Once a given number of events have been processed in steady state, the simulation will complete.

### Performing output analysis

What is output analysis? Markovian modelling techniques generate results by calculating and using the steady-state probability distribution. In the case of simulation, the approach is algorithmic rather than analytic. This means that the simulation is executed and measurements are observed rather than calculated.

However, a single simulation run produces only a single path through the sample state space. It is clear that to obtain a better, non-biased estimate of each desired measurement, a simulation needs to be run several times in order to obtain different paths.

It should be obvious that the samples used in each run must be different in order for the additional runs to be meaningful. The task of carrying out this process to obtain non-biased results is termed output analysis.

Up to this point we have only produced a single run with our simulation. From the defined measures we have been able to obtain sample measurements depending on the measures' type. When output analysis is used we obtain total measurements rather than sample measurements.

Furthermore, since we have multiple sets of sample measurements for each measure rather than just a single one, we are able to estimate the accuracy of our results. To accomplish this, the observations collected in each run are used and, for each measure, a confidence interval is produced. This confidence interval tells us how accurate, i.e. how close to the true value, our estimate of the measure's mean is.

For a modeller who needs to make informed decisions about a system based on the results of a simulation, output analysis is a necessity. SimJava provides the modeller with two methods for performing output analysis: independent replications and batch means.

Independent replications. The concept of independent replications is to make several runs of a simulation rather than a single one. For each run to be different, the simulation's sample generators need to produce different sequences of samples.

This translates to providing different initial seeds in each replication. The method of independent replications is fairly simple to understand. Furthermore, each run is uncorrelated with the others since in each one, the generators used are seeded differently.

Since each replication is a simulation run in its own right, each replication will have its own transient and steady-state period. The drawback of this method is that, in the case of steady-state analysis, a part of each replication must be discarded. This increases the total time it takes to run the entire simulation. Obviously, this problem does not arise in the case of transient analysis.

In each replication, observations are collected which serve to produce the replication's means, one for each measure. Once all the replications have been carried out, the means are used to estimate a total mean, variance and standard deviation. Based on these, each measure's confidence interval is calculated and the mean's accuracy is estimated.
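The combination step can be sketched as follows. The replication means and the Student-t quantile (t ≈ 2.776 for 95% confidence with five replications) are illustrative assumptions, not values from the tutorial:

```python
import math
import statistics

replication_means = [4.2, 3.9, 4.5, 4.1, 4.3]      # one mean per replication
n = len(replication_means)

total_mean = statistics.mean(replication_means)
std_dev = statistics.stdev(replication_means)      # sample standard deviation
t_quantile = 2.776                                 # t_{0.975, n-1} for n = 5
half_width = t_quantile * std_dev / math.sqrt(n)

print(total_mean, total_mean - half_width, total_mean + half_width)
```

With a different number of replications the quantile changes; a full implementation would look it up from the t distribution rather than hard-code it.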

These results, along with total maximum and minimum observations as well as total exceedence proportions, form the total measurements that are presented to the modeller.

Batch means. The other method provided for output analysis in SimJava is batch means.

The approach in this case is to make one large run as opposed to many smaller replications. The observations of this run are then placed into batches and for each batch its mean is calculated. The batch means in this case are used much like the replication means in the method of independent replications.

The benefit of the batch means method is that only one set of observations needs to be discarded and as such the time to run a simulation is decreased. However, since all the observations are made in a single run, the batch means obtained will be correlated. For example if a packet in a network experiences large waiting times, this probably means that subsequent packets will experience similar delays. Since observations are correlated the batch means will also be correlated.

This is why the observations need to be grouped into batches that are large enough to keep the serial correlation between successive batch means small, but small enough that there are many batches, and hence many estimates of each measure's mean. Once the best number of batches is found, the estimation of the means and the calculation of confidence intervals proceed as in the method of independent replications.
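The batching step can be sketched as follows. The helper names and the stand-in data are hypothetical; a real implementation would also search over the number of batches and decide when the lag-1 correlation of the batch means is acceptably small.

```python
import statistics

def batch_means(data, n_batches):
    # Group the observations into equal-size batches and average each batch.
    size = len(data) // n_batches
    return [statistics.mean(data[i * size:(i + 1) * size])
            for i in range(n_batches)]

def lag1_autocorr(xs):
    # Lag-1 serial correlation of a sequence.
    m = statistics.mean(xs)
    dev = [x - m for x in xs]
    return sum(a * b for a, b in zip(dev, dev[1:])) / sum(d * d for d in dev)

# Deterministic correlated stand-in series (an AR(1)-style recursion).
x, data = 0.0, []
for i in range(1000):
    x = 0.9 * x + ((i * 41) % 101) / 101.0
    data.append(x)

means = batch_means(data, 10)
print(lag1_autocorr(data), lag1_autocorr(means))
```

With equal-size batches the average of the batch means equals the overall mean, so batching loses nothing about location while taming the serial correlation.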

If no output analysis method is selected, no output analysis will be performed. Variations of the selection method exist that allow the modeller to choose between the available methods, as well as the parameters of each method, such as the number of replications to perform or the confidence level with which to calculate the confidence intervals. In any case, once the desired method has been selected, its application is performed automatically.

The simulation's report will now contain the experiment's total, rather than sample, measurements. Concerning independent replications, the modeller must be aware of a subtle point.

The subtle point is the re-initialisation of mutable objects and non-final static fields. If an object such as a Vector is used in an entity, it needs to be explicitly reset to its starting state by the modeller. A good place to reset such objects is at the start or end of the body method.

One way is to reset the object at the start of the body method: since in each replication the entity is reset and its body method is executed, the Vector will return to its original state at the start of each replication. Another way would be to add the resetting code at the end of the body method. An additional topic that should be addressed concerning independent replications is the process of reseeding the entities' sample generators.

This is fine as long as the original seeds are spaced well enough apart to accommodate all the replications. Concerning batch means, the only comment that needs to be made is that the method may not be used if efficient measures are present within the simulation.

Finally, if batch means can be used, the number of batches used to batch the observations will be selected to produce the least serial correlation among the batch means for the majority of measures.

In our example simulation we will use independent replications to perform output analysis. The simulation's report is here.

Variance reduction using output analysis. In the section on termination conditions we mentioned a type of condition based on the accuracy of a confidence interval. In order to use such a condition, it is necessary to specify an output analysis method to be used as a variance reduction technique.

If independent replications is selected, 5 initial replications are made. If the accuracy of the desired confidence interval is not good enough, an estimate is made of how many additional replications will be required. The simulation notifies the modeller accordingly and proceeds to perform the additional replications. If a very tight confidence interval is required, it could be a good idea to specify some or all of the simulation's measures as efficient.
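The estimate of how many replications will be required is commonly obtained by inverting the confidence-interval half-width formula, n ≈ (t·s/ε)². This is a hedged sketch with illustrative pilot values, not the tutorial's actual procedure:

```python
import math
import statistics

pilot = [4.2, 3.9, 4.5, 4.1, 4.3]          # means of 5 initial replications
s = statistics.stdev(pilot)                 # pilot standard deviation
t_quantile = 2.776                          # t_{0.975, 4}, for the pilot size
eps = 0.1                                   # desired CI half-width

# Replications needed so that t * s / sqrt(n) <= eps.
n_required = math.ceil((t_quantile * s / eps) ** 2)
additional = max(0, n_required - len(pilot))
print(n_required, additional)
```

A real sequential procedure would re-estimate s and the t quantile as replications accumulate rather than trusting the pilot values once.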

This is the case because if detailed measures are used and many replications are made, the memory requirements will rise considerably. We mentioned that batch means may also be used as a variance reduction technique.

In this case, an initial number of observations are collected and then batched into a large number of batches. The batch means are then used to calculate their serial correlation. If it is too high the modeller is informed and more observations are collected. Similarly, if the correlation is low but the confidence interval isn't tight enough, additional observations are collected. To use batch means as a variance reduction technique no efficient measures can be present in the simulation.

Finally, regardless of which output analysis method is selected for the termination condition, the condition applies only in steady state. Furthermore, the method selected will be used to produce confidence intervals for the rest of the simulation's measures as well.

No additional output analysis method may be provided. In our example simulation we could specify a termination condition based on the accuracy of the utilisation of disk 1.

### The simulation's report

What does the report contain? In the previous sections we have mentioned the simulation's report and provided samples of it for each simulation presented.


At this point we will discuss the report in more detail. The report file contains all the information that the modeller could want from the measures they defined.

Apart from this, however, the report contains general information about the experiment, such as the time it took to complete and the conditions used. The general information contained in the report consists of:

- The version of SimJava used.
- The date the simulation was run.
- The start and end time of the simulation.

One drawback was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating-point registers. Compilers also often lacked support, requiring programmers to resort to assembly-language coding. SIMD on x86 had a slow start. The introduction of 3DNow! Apple Computer had somewhat more success, even though they entered the SIMD market later than the rest.

AltiVec offered a rich system and can be programmed using increasingly sophisticated compilers from Motorola, IBM and GNU; therefore assembly-language programming is rarely needed. However, Apple computers later moved to Intel x86 processors.

This can be used to exploit parallelism in certain algorithms even on hardware that does not support SIMD directly. Microsoft added SIMD support to .NET. The interface consists of two types: