T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
453.1 | should work | DSSDEV::SAUTER | John Sauter | Mon Aug 04 1986 09:02 | 19 |
| I've never done anything with the Atari, but I had no trouble getting
my Apple II to accept a voice dump from a DX7. All you have to
do is allocate a 5000-byte buffer. If my arithmetic is correct
you've got about 30 microseconds between bytes, which means about
20 to 25 simple instructions on a slow chip like the Apple's 6502.
That's plenty of time to call a subroutine which waits for a byte
from the interface, store the byte in your buffer, update the buffer
pointer, check for an F7 hex to terminate the string, and loop back.
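In C, that loop might look something like this (just a sketch;
midi_ready() and midi_read() are stand-ins for whatever your
interface actually provides):

    unsigned char buf[5000];        /* big enough for a full DX7 dump */
    int i = 0;
    unsigned char c;

    do {
        while (!midi_ready())       /* wait for a byte from the interface */
            ;
        c = midi_read();
        buf[i++] = c;               /* store it and bump the pointer */
    } while (c != 0xF7 && i < 5000);  /* F7 hex terminates the string */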
I was using the Roland MPU-401 but I suspect a dumb serial interface
would not be any different.
On a multitasking system you would take an interrupt for each byte
rather than looping waiting for the interface. However, a multitasking
system would include a CPU which let you do more in 30 microseconds,
so I would still expect you to win. If the interrupt-level code
has a 5000-byte buffer which it can fill without needing service
from non-interrupt level then I would expect DX7 voice dumps to
work without any trouble.
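Something like this, say (a sketch only; midi_data_register() is a
hypothetical read of the interface's receive register, and hooking
the routine to the interrupt is system-specific):

    static volatile unsigned char dump_buf[5000];
    static volatile int dump_len = 0;

    void midi_isr(void)                 /* entered once per received byte */
    {
        unsigned char c = midi_data_register();
        if (dump_len < 5000)
            dump_buf[dump_len++] = c;
        /* non-interrupt code watches dump_len and the final F7
           at its leisure */
    }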
John Sauter
|
453.2 | | DB::RAVAN | | Mon Aug 04 1986 12:27 | 25 |
| Steph,
Does the ST have a problem with a long stream of MIDI? That is,
are you asking this question as a result of personal experience?
As John has pointed out, I'd be surprised if there is a problem.
But then, I have an Amiga, not an Atari, so I *don't* speak from
any personal experience at all.
I am in the middle of writing the lowest level interrupt service
routine for a MIDI port on the Amiga. I can say that I learned
very early on that I wouldn't be able to view the MIDI stream through
all the normal software layers that AmigaDos imposes. There was
simply too much in the way and I needed to time the incoming data
stream in an intelligent way. So I'm writing a custom-crafted
ISR specifically for the purpose (and have been for some time now,
however... :-)).
So what I can say is that, if you are trying to use a high-level
language to write some (for example) sequencing software, and plan
on using the existing software between you and the hardware, good
luck. 31.25K baud is fast. For example, at 8MHz, assuming 10 clocks
per instruction, you have time for about 256 instructions between bytes:
31,250 bits/sec over a 10-bit serial frame is 3,125 bytes/sec, or 320
microseconds per byte, and at 1.25 microseconds per instruction that
comes to 256. Any path longer than that and you'll lose data.
-jim
|
453.3 | | DB::RAVAN | | Mon Aug 04 1986 22:07 | 16 |
| RE: -.1
One minor point: My timing analysis assumes that the ST serial port
is implemented the same way as the Amiga's.
The Amiga serial port is simply a parallel word backed up by a serial
shift register. The parallel word can be read while the serial
shift register accumulates the next data byte. The parallel data
is stored as a word so that you can read status bits in the same
instruction that gives you the data byte.
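In C, reading it might look like this (a sketch; the address and the
bit position are the ones documented for the Amiga's SERDATR register,
but treat them as assumptions and check the hardware manual):

    #define SERDATR (*(volatile unsigned short *)0xDFF018)
    #define RBF     0x4000                /* receive-buffer-full status bit */

    unsigned short w;
    unsigned char byte;

    w = SERDATR;                          /* status and data in one read */
    if (w & RBF)
        byte = (unsigned char)(w & 0xFF); /* low byte is the data */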
If the ST has some sort of FIFO buffer to hold unread bytes, my
analysis is incorrect. In that case, you'd have time to execute
approximately 256*n instructions before you began to lose data,
where 'n' is the depth of the FIFO buffer.
-jim
|
453.4 | Revector all the interrupts | MAY19::BAILEY | Stephen Bailey | Tue Aug 05 1986 18:17 | 56 |
| As I mentioned (I think), I've actually never tried this, since
I am trying to DECIDE whether to get an ST (vs. Amiga vs. Wait), but
I asked the person who posted the note in the ST notes file (see the
bottom of this note) and he said he was actually losing data, even with
a tight loop picking off bytes as they arrive.
I can't imagine that his tight loop is generating 256 instructions,
so it looks to me like the problem is the OS's interrupt service
routines, since it loses data after about 200 bytes (~6 milliseconds
using John's numbers). Revector them so the clock loses time and
the disk loses data? (I'll do ANYTHING to be able to edit DX7 voices with
something other than that one silly slider! ;^) )
Steph.
--------------
From: KELVIN::HALLOWELL 5-AUG-1986 16:28
To: MAY19::BAILEY
Subj:
Steph,
I haven't yet resolved the dropped-data problem. To define it better for you,
let me say....
| I was living with whatever Lattice C had compiled for me in a loop like

    #define MIDI 3                        /* BIOS device number of the ST MIDI port */

    while (Bconstat(MIDI) != 0)           /* char ready @ MIDI port */
    {
        voice_buff[ctr++] = Bconin(MIDI);
    }
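For what it's worth, a variant that blocks for each byte and stops
only on the terminator (a sketch, assuming MIDI is BIOS device 3 and
that Bconin waits for input) wouldn't quit just because the port is
momentarily empty between bytes:

    int c;

    do
    {
        c = (int)Bconin(MIDI) & 0xFF;    /* Bconin waits for the next byte */
        voice_buff[ctr++] = c;
    } while (c != 0xF7);                 /* F7 ends a system-exclusive dump */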
I tested the timing sensitivity by
a) capturing the same dump on my PC clone/Roland MPU with virtually
the same code in that environment --obviously no problem.
b) writing files a character at a time from the PC to the ST
over the MIDI line, with programmable delay time padding the
space between characters (a sketch of such a sender follows).
With 0 delay you start seeing data loss after about 200 chars;
longer delays extend the 'crapout' limit to the point that you
can send long files with no loss.
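The sender side of test (b) might look like this (a sketch: it assumes
the MPU-401 is already in dumb/UART mode at the conventional ports,
0x330 data and 0x331 status, with status bit 6 low meaning it will
accept a byte; delay_ms() stands in for whatever wait your library has):

    #include <conio.h>                   /* inp()/outp() on DOS compilers */

    #define MPU_DATA   0x330
    #define MPU_STATUS 0x331
    #define DRR        0x40              /* 0 = ready for a byte from the PC */

    void send_byte(int c, int gap_ms)
    {
        while (inp(MPU_STATUS) & DRR)    /* wait until the MPU will take it */
            ;
        outp(MPU_DATA, c);               /* byte goes straight out the MIDI jack */
        delay_ms(gap_ms);                /* the programmable padding */
    }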
What I have done is temporarily sidestep the problem, spending time on some
voice editing code for my Korg DW8000, which has an inherently friendlier
voice dump. I'm getting Mark Williams C and planning to try its code and then
optimize it or bypass it with a faster ML driver in a few weeks.
All other things about the ST are excellent as far as I'm concerned, and I
think that this little glitch will be pretty soluble with a little more effort.
Let me know if you end up working in this area; we could possibly share some
code and other info.
Dave Hallowell
------------
|
453.5 | 300 µsec; use assembly | DSSDEV::SAUTER | John Sauter | Wed Aug 06 1986 08:35 | 23 |
| re: .1--As hinted in a later response, there is a problem with my
arithmetic in 453.1. I forgot that 31.25K per second is a *bit*
rate, not a *byte* rate! You actually have about 300 microseconds
per byte, not 30.
I don't know how the Atari ST hardware and software are arranged
internally, but if its developers gave any thought to real-time
programming they would have arranged for the clock routine to be
interruptible by a byte arriving at the serial port.
There should be some specification somewhere of the maximum time
that they keep interrupts disabled. If there isn't, you may be able
to deduce it with a logic analyzer on the system bus, looking at
the interval starting with the serial interface asking for an input
interrupt and ending with the character being accepted by the CPU.
I would be concerned about using Lattice C for this kind of
application. If the operating system allows, code a device driver
in carefully optimized assembly code. Even if the operating system
won't let you code your own device driver, using assembly instead
of C for this loop might save you the time you need. For all I
know the C compiler could be generating calls to its run-time system
within your loop.
John Sauter
|