Java Sound Card Oscilloscope

Introduction

javaScope is a basic oscilloscope program that uses the PC sound card. The program was written in Java using the Eclipse IDE and the Windowbuilder plug-in. It's somewhat limited, with a usable analog bandwidth of about 10 kHz, no adjustable trigger, and no amplitude gain control. There is, however, an adjustable horizontal timebase, and a loop-back function to monitor the audio input.


1) The Java Sound API

The Java Sound API is a fairly comprehensive library for controlling audio playback, audio capture, MIDI synthesis, and basic MIDI sequencing on the PC sound card. A good introductory tutorial can be found here: http://www.java-tips.org/java-se-tips/javax.sound/capturing-audio-with-java-sound-api.html. Another series of tutorials, including the loop-back example that got me started, can be found here: http://www.jsresources.org/index.html.

Basically, for this application, the Sound API requires two “audio lines”: a TargetDataLine, which provides an internal buffer to receive the audio input, and a SourceDataLine, which provides an internal buffer for the audio output. The lines are configured via the DataLine.Info class, which specifies the audio format (sample rate etc.) and the function of the line (input or output). Once the lines are configured, their operation can commence with the start() method.

The audio format is shown in the following code snippet:

private AudioFormat getFormat()
{
    int sampleSizeInBits = 16;       // 16-bit samples
    int channels = 1;                // mono
    int frameSize = 2;               // bytes per frame (one 16-bit sample)
    float frameRate = SAMPLE_RATE;   // one frame per sample
    boolean bigEndian = false;       // little-endian byte order
    return new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                           SAMPLE_RATE,
                           sampleSizeInBits,
                           channels,
                           frameSize,
                           frameRate,
                           bigEndian);
}

As shown, the input and output audio data are in 16-bit signed PCM format with a sample rate of 44100 samples/sec. Only one audio channel is opened. The frame size is two bytes, and the bytes within each frame are ordered little-endian (least significant byte first).
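As a concrete example of the byte ordering, the sample value 0x1234 (4660 decimal) arrives in the stream as the byte 0x34 followed by 0x12. A minimal stand-alone sketch of the decode (the class name is just for illustration; the same conversion appears in captureAudio() below):

public class PcmDecodeDemo
{
    public static void main(String[] args)
    {
        byte lo = 0x34;                       // least significant byte arrives first
        byte hi = 0x12;                       // most significant byte second
        short sample = (short) ((lo & 0xFF) | (hi << 8));
        float normalized = sample / 32768.0F; // scale into [-1.0, 1.0)
        System.out.println(sample + " -> " + normalized);
    }
}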

The audio operation is contained in the captureAudio() procedure shown in the following code snippet:

private void captureAudio()
{
    try
    {
        // Set up audio input
        final AudioFormat format = getFormat();
        DataLine.Info inInfo = new DataLine.Info(TargetDataLine.class, format);
        inLine = (TargetDataLine) AudioSystem.getLine(inInfo);
        inLine.open(format, LINE_BUFF_SIZE);
        inLine.start();

        // Set up audio output
        DataLine.Info outInfo = new DataLine.Info(SourceDataLine.class, format);
        outLine = (SourceDataLine) AudioSystem.getLine(outInfo);
        outLine.open(format, LINE_BUFF_SIZE);
        outLine.start();

        Runnable runner = new Runnable()
        {
            public void run()
            {
                byte inBuffer[] = new byte[READ_BUFF_SIZE];
                byte outBuffer[] = new byte[READ_BUFF_SIZE];
                int buffSize = inBuffer.length;

                while (running)
                {
                    int bytesAvailable = inLine.available();
                    if (bytesAvailable >= READ_BUFF_SIZE)
                    {
                        int bytesRead = inLine.read(inBuffer, 0, buffSize);

                        // Convert little-endian 16-bit PCM to floating point
                        for (int i = 0; i < buffSize; i += 2)
                            dataBuff[i/2] = ((inBuffer[i] & 0xFF) | (inBuffer[i + 1] << 8)) / 32768.0F;

                        // Convert the floating point data back to signed PCM,
                        // just to prove that it's possible to do a bit of processing in Java.
                        if (chckbxLoop.isSelected())
                        {
                            for (int i = 0; i < buffSize; i += 2)
                            {
                                // Saturation
                                float fSample = dataBuff[i/2];
                                fSample = Math.min(1.0F, Math.max(-1.0F, fSample));
                                // Scaling and conversion to integer
                                int nSample = Math.round(fSample * 32767.0F);
                                outBuffer[i + 1] = (byte) ((nSample >> 8) & 0xFF);
                                outBuffer[i] = (byte) (nSample & 0xFF);
                            }
                        }
                        else
                        {
                            for (int i = 0; i < buffSize; i++)
                            {
                                outBuffer[i] = 0;
                            }
                        }
                        outLine.write(outBuffer, 0, bytesRead);
                        canvas.repaint();
                    }
                }
                inLine.flush();
                inLine.stop();
                inLine.close();
                outLine.flush();
                outLine.stop();
                outLine.close();
            }
        };

        Thread captureThread = new Thread(runner);
        captureThread.start();
    }
    catch (LineUnavailableException e)
    {
        System.err.println("Line unavailable: " + e);
        System.exit(-2);
    }
}

The captureAudio() procedure is called when the “Start” button on the GUI is pressed. As shown, the data lines are configured and started. Buffers for the input and output audio data are half the size of the data line buffers. This way, the audio data can be manipulated while the input data line buffer is being filled and the output buffer is being emptied. The audio data manipulation consists of converting the input PCM data to floating point and handing it to the display panel via repaint(). The thread also converts the floating point data back to PCM and writes it to the output data line. This isn’t really necessary, but I wanted to see if it was possible to do a bit of processing in Java. It might also be a useful thing to know for future DSP applications such as filtering or FFTs.
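As an illustration of the kind of processing this makes easy (the filter below is not part of javaScope), a 4-point moving average acting as a crude low-pass filter on the floating point buffer might look like this:

// Sketch (not part of javaScope): once the samples are floating point,
// a 4-point moving average makes a crude low-pass filter.
float[] filtered = new float[dataBuff.length];
for (int i = 0; i < dataBuff.length; i++)
{
    int taps = Math.min(4, i + 1);    // shorter window at the start of the buffer
    float sum = 0.0F;
    for (int k = 0; k < taps; k++)
        sum += dataBuff[i - k];
    filtered[i] = sum / taps;
}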

When the “Stop” button is pressed, the “running” variable is set to false. The capture loop then exits, and the audio data lines are flushed, stopped, and closed with the flush(), stop(), and close() commands.
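The wiring behind the two buttons might look something like the following sketch. The listener bodies and the btnStart/btnStop names are assumptions; note that “running” should be declared volatile so the Stop press is seen promptly by the capture thread:

// Sketch (assumed handlers, not the program's exact code).
btnStart.addActionListener(new ActionListener()
{
    public void actionPerformed(ActionEvent e)
    {
        running = true;
        captureAudio();   // open the lines and launch the capture thread
    }
});

btnStop.addActionListener(new ActionListener()
{
    public void actionPerformed(ActionEvent e)
    {
        running = false;  // capture loop exits, then flushes, stops, and closes the lines
    }
});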


2) Display Panel

The audio data is displayed on a JPanel component. It should be noted that the GUI controls are placed using the Windowbuilder Design View in Absolute layout mode. This makes it very easy to place the various components, including the display panel. In Java, however, in order to draw on the panel, it must be sub-classed from a standard panel. When this happens, it is no longer regarded as a standard panel, and it disappears from the Design View. It is possible to build a custom component, but for a simple “one-off” GUI this seemed overly complicated. The work-around is to place the GUI controls before writing the drawing code. If you need to re-position the panel after writing the code, change the declaration to a standard JPanel, then change it back afterwards, as sketched below.
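In code, the work-around amounts to a one-line declaration change (shown here with the panel field named canvas, as in the snippets that follow):

// While positioning controls in the Design View, use a standard panel:
//     JPanel canvas = new JPanel();
// Once the layout is done, switch the declaration back to the drawing subclass:
dispPanel canvas = new dispPanel();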

The display code is shown in the following snippet:

public class dispPanel extends JPanel
{
    private static final long serialVersionUID = 1L;

    public void paintComponent(Graphics g)
    {
        Graphics2D g2;

        super.paintComponent(g);
        g2 = (Graphics2D) g;
        drawScreen(g2);
    }
}
private void drawScreen(Graphics2D g2)
{
    BufferedImage dispBuff = new BufferedImage(canvas.getWidth(),
                                               canvas.getHeight(),
                                               BufferedImage.TYPE_INT_RGB);
    Graphics2D g2dScreen = dispBuff.createGraphics();
    g2dScreen.drawImage(gridBuff, null, 0, 0);

    // Find trigger point
    int trig = 0;
    float trigLevel = 0.05f;
    float upperTrigLevel = trigLevel + 0.02f;
    float lowerTrigLevel = trigLevel - 0.005f;

    if (running)
    {
        for (int i = 1; i < SAMPLE_SIZE - 1; i++)
        {
            // If signal is within the trigger window
            if ((dataBuff[i] >= lowerTrigLevel) &&
                (dataBuff[i] <= upperTrigLevel))
            {
                // Check for a rising slope
                if ((dataBuff[i-1] < dataBuff[i]) &&
                    (dataBuff[i+1] > dataBuff[i]))
                {
                    trig = i;
                }
            }
            if (trig != 0) break;
        }
    }

    g2dScreen.setPaint(Color.green);
    g2dScreen.setStroke(new BasicStroke(2));
    Point2D.Float ptOldPoint = new Point2D.Float();
    Point2D.Float ptNewPoint = new Point2D.Float();

    ptOldPoint.setLocation(0, 125 - (int)(dataBuff[trig] * 255));

    for (int i = 0; i < sampSize; i++)
    {
        ptNewPoint.x = (int)((float)i / samplesPerPix);
        ptNewPoint.y = 125 - (int)(dataBuff[trig + i] * 255);
        g2dScreen.draw(new Line2D.Float(ptOldPoint, ptNewPoint));
        ptOldPoint.setLocation(ptNewPoint);
    }

    g2.drawImage(dispBuff, null, 0, 0);
    g2dScreen.dispose();
}

As shown, the screen is updated via the paintComponent() procedure of the dispPanel class, which is called in response to the canvas.repaint() command in captureAudio(). All paintComponent() does is cast its Graphics context to a Graphics2D context and call the drawScreen() procedure. The drawScreen() procedure uses buffered images to update the panel and eliminate flickering. The dispBuff BufferedImage is created to fill the panel, and the previously loaded gridBuff (the background grid) is drawn into it. A trigger point is established as an offset into the floating point data buffer. The data is then scaled in accordance with the timebase and screen size, and plotted onto dispBuff. Finally, the BufferedImage dispBuff is drawn onto the display panel.
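For reference, the relationship between the selected sweep time and the scaling variables used above can be sketched as follows. setSweepTime() is a hypothetical handler, not necessarily how the program actually computes them; sampSize and samplesPerPix are the fields used in drawScreen():

// Sketch (hypothetical handler): derive the horizontal scale
// from the sweep time selected in the combo box.
private void setSweepTime(float sweepTimeSec)
{
    int screenWidth = canvas.getWidth();              // pixels across the display
    sampSize = (int)(sweepTimeSec * SAMPLE_RATE);     // samples spanning one sweep
    samplesPerPix = (float)sampSize / screenWidth;    // samples per horizontal pixel
}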


3) Operation

Operation of the javaScope is straightforward. Simply press the “Start” button and go. The audio source should be connected to the line input of the sound card. The timebase can be adjusted from the drop-down Sweep-Time combo box. The displayed audio can be monitored by selecting the “Loop Audio” check box.

4) Things To Do

As mentioned, javaScope is a very basic oscilloscope program. The original intent was to explore the Java Sound API and the Java 2D Graphics API and combine them in a simple GUI. With this in mind, it should be possible to add things like an adjustable trigger level or a variable amplitude gain. Another facet of the Sound API which has not been used is the mixer. Mixer functions are easily available from the Volume Control and Recorder applications, but they can also be accessed through the Sound API and incorporated into the GUI, starting from something like the sketch below.
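The available mixers can be enumerated through AudioSystem; a minimal stand-alone sketch (the class name is just for illustration):

import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Mixer;

// Sketch: list the mixers the Sound API exposes on this machine.
public class MixerList
{
    public static void main(String[] args)
    {
        for (Mixer.Info info : AudioSystem.getMixerInfo())
        {
            System.out.println(info.getName() + " - " + info.getDescription());
        }
    }
}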

5) Running the Program

The program was written in Java using the Eclipse IDE, so it can be run from Eclipse, or it can be run from the executable jar file which, along with the source code, can be found here: javaScope.zip