
RPI try Sound with PYTHON

As I started here with sound and Python (2.7.3) in IDLE,
installing the pyaudio lib was a disaster.
Using the already installed PYGAME lib I could easily play a WAV file,
but now I tested again: [.load(file)] [.play()] will play '.wav' and '.mid' files.
How that MIDI works on the Debian side I don't know; I only noticed that timidity is also installed already.

But with PYGAME I also try to generate a sound.
The next step is to generate sound and play it. Like from a number N = 1 .. 88 ( piano keys ) calculate a frequency:
F = 440 * 2 ** ((N-49)/12.0)
This info I found here, and I checked some additional calcs in my public online spreadsheet.
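That key-to-frequency formula can be tried standalone ( function name is mine ):

```python
def key_to_freq(n):
    """Frequency in Hz for piano key n (1..88); key 49 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((n - 49) / 12.0)

print(key_to_freq(49))   # 440.0 (concert pitch A4)
print(key_to_freq(1))    # 27.5  (A0, the lowest piano key)
```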

amp = 0.5 * 32000                               # amplitude for 16 bit, with a volume factor
tmp = np.zeros((slength, 2), dtype=np.int16)    # create a ( 2 dim ) stereo numpy array

for s in range(slength):                        # and fill it with a sinus using amp and pan ( as panl and panr factor )
    v = amp * np.sin(s * 2 * np.pi * F / SAMPLERATE)
    tmp[s][0] = int(panl * v)
    tmp[s][1] = int(panr * v)

sound = pygame.sndarray.make_sound(tmp)         # create a sound from it
sound.play()                                    # and play it

This is lots of MATH in a loop; the length of the signal array depends on
- the sample frequency
- the seconds ( time length ) of the signal
- * 2 for stereo
First I tried to reduce the sample frequency ( from 16000 to 11025 to 8000 Hz ),
next I made the signal short in time ( from 1 s .. 0.1 s ) and played it with loops = -1 continuously.
The bits ( resolution ) ( 8, 11, 16, 20 ) have no great influence.
Next is to improve the numerics by taking as much floating point calc out of the loop as possible, but still there is

v = sin(f*t)
left = int(panl * v)
right = int(panr * v)
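A sketch of that optimization ( plain Python, all names mine ): the phase step and the combined amp*pan factors are computed once, so the loop body is down to one sin() call and two multiplies:

```python
import math

def fill_stereo(slength, freq, samplerate, amp, panl, panr):
    """Fill a stereo sample list; constants are computed once, outside the loop."""
    step = 2.0 * math.pi * freq / samplerate   # phase increment per sample
    al, ar = amp * panl, amp * panr            # combined amplitude factors
    out = []
    for s in range(slength):
        v = math.sin(s * step)
        out.append((int(al * v), int(ar * v)))
    return out

samples = fill_stereo(8, 1000.0, 8000, 32000, 0.5, 0.5)
```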

Operation: I know little about Python and Tkinter,
but I can not use that together with PYGAME, so I have to learn/do it again with PYGAME.
- make a keyboard layout completely different from that 2 row piano
: white = C,D,E,F,G,A,B, sometimes done on the PC keyboard via [q][w][e][r][t][y][u]
: black = C#,D#,F#,G#,A# ( only 5 black keys per octave, there is no B# )
But I start with the [z] row for 10 keys, then the [a] row, [q] row, [1] row, and the 4 keys [F1]..[F4].
That is 44 Notes; with CAPS the same keys give the next 44 Notes, so I have the full 88 piano keys,
but not in the C,D,E,F,G,A,B sequence, it is the C,C#,D,D#,E,F,F#,G,G#,A,A#,B sequence.
[CAPS] for the upper notes F4 .. C8

And window [x] close.
As in some virtual pianos you can use the keyboard, and I draw it in a picture so the mouse ( click ) can be used too.

Missing Tkinter, I build a PYGAME slider operation for:
amp on keypad [-] [+]
pan on arrow keys [left] [right]
selected sample frequency <> mouse only
duration ( 0.01 .. 2 sec ) <> mouse only
play ( -1 infinite, 0 original array length, 1 .. 10 again ) <> mouse only

All keys operable by mouse too!
Added a CAPS LED ( pending issue: CAPS keyboard? VNC? program? ).
Also I use only the first 800 samples of the array and show them in a stereo OSCI ( option button, mouse click ),
and also show more info about the played Note.

While I think the operation ( via PYGAME tools ) is already usable,
the produced sound is terrible for many Notes.
Low frequencies I do not hear at all;
higher frequencies seem to have a sample problem ( jump +/- ),
but that could be an effect like: at a low Fs = 8000 the highest sound would be 4000 Hz,
see Aliasing, Nyquist–Shannon sampling theorem.

High frequencies I also do not hear.
Note frequencies where no whole number of sinus periods fits in the array sound bad,
even more when using play(<>0).
I have to go back to the MATH and the sinus and the fill of a long array.
It is actually nonsense to calculate the sinus values for more than one sinus period ( or even more than the first 90 deg / 1/4 of the sinus ); the rest of the array fill could be done by copy.
And the last sinus must be finished, so the array length is overwritten depending on the frequency of each Note.
But poor numerics: in the copy loop I still do arraylength * 2 floating point calcs with PANL and PANR.
The tones are much better; the timing is not much better, 1 sec from keypress to tone ( operating via VNC ).
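The quarter-sinus trick can be sketched like this ( my own naming; assumes the samples per period are divisible by 4 ): only the first 90 deg is calculated with sin(), the rest of the period is mirror and sign copies:

```python
import math

def one_period(n, amp):
    """One full sinus period of n samples, built from only the first quarter (n divisible by 4)."""
    q = n // 4
    quarter = [amp * math.sin(math.pi * i / (2 * q)) for i in range(q + 1)]  # 0 .. 90 deg
    half = quarter[:q] + [quarter[q - j] for j in range(q)]                  # rise, then mirrored fall
    return [int(v) for v in half] + [int(-v) for v in half]                  # second half is negated

print(one_period(8, 1000))  # [0, 707, 1000, 707, 0, -707, -1000, -707]
```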

But a SYNTHESISER has more to do than generating a sinus.
Growing slowly into Python / PYGAME and also reading about sound...
First I wanted to make 3 buttons: [ SINUS ][ TRIANGLE ][ SQUARE ].
A complete approach would be something with sliders for 1 .. N harmonics and an inverse FFT.
But I have a new idea for a simplified manipulation of the signal with a graphic HMI, where you would need just one mouse click; let's call it ONE TOUCH!
A 1/4 sinus ( PI/2 ) has a length "1.0" and height 1.0; visualize that area:
- maximal as FULL: it is a SQUARE signal
- minimal as a short spike on the right at 1.0 ( amp 1.0 )
So I think you can generate any kind of signal ( between those 2 extremes ) by 2 connected lines from (0,0) to (x,y) to (1,1).
The operation would be just to select that (x,y) point with the mouse.
A [RESET] button would bring you back to the sinus curve.
No idea how it sounds, but I like that idea and the challenge to program it in PYGAME.
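The two connected lines can be written as a small function ( a sketch, names mine ): for a position t in 0..1 inside the quarter period, the amplitude follows the line (0,0)-(x,y), then (x,y)-(1,1):

```python
def one_touch(t, x, y):
    """Amplitude at position t (0..1) of the quarter wave shaped by the point (x, y)."""
    if t <= x:
        return y * t / x if x > 0 else y          # first segment; x = 0 jumps straight to y
    return y + (1.0 - y) * (t - x) / (1.0 - x)    # second segment up to (1, 1)

# (0.5, 0.5) is the straight diagonal, a triangle-like quarter wave;
# a point near (0, 1) approaches the SQUARE, a point near (1, 0) the spike
print(one_touch(0.5, 0.5, 0.5))  # 0.5
```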
OK. Despite the problem of creating waves that are too short for the math:
a sinus ( or triangle ) has 4 sections; with the new x,y changed curve I need 8 sections, so below 8 samples per sinus it can not work, coded manually.
That is ok. Still there is the problem that odd sample counts per sinus give rounding errors for the 8 section calc.
Now what do I get with that tool? I can change the COLOR of the TONE,
and that has an interesting influence for low frequencies: as sinus ( and shorter ) I can not hear them; moved towards the SQUARE wave form they are much better audible.
First a look at the operation area:

Default plays SINUS; mouse over the "signal editor" shows a blue select rectangle and a cyan curve ( moves with the mouse ).

On CLICK, (x,y) is used to calculate the new signal wave form; see it in the edit window AND the real array data ( sound ) with the OSCI ( shows only the first 800 samples of the array ).
On selecting the [SIN] button it goes back to the sinus calc wave form.
And I can hear the tones now down to A0, but the highest tones I still do not hear ( my ears? ).
Sadly the timing is not much better from the RPI keyboard ( versus VNC operation ).
TEST HDMI: I tested also with a HDMI TV, and there at first I had no sound.
Run Sonic Pi, press the HDMI button there, close it, start python, and now it works.
( Where to find python code to access that? )
Even though my tool has an AMP slider ( 0 .. 1.0 ) * 32000 ( for 16 bit ) into the numpy array into the pygame sound mixer, that is not the volume control of the RPI SYSTEM.
To have the [HDMI][A-Output] selection and the system VOLume control from here I need OS commands.
First I play with an external python command I will build in later: code
Then I try a python (2) pygame GUI version: code
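A sketch of those OS commands wrapped for python ( assumption: the classic Raspbian ALSA control numid=3 for output routing and the PCM control for volume; check with "amixer controls" on your system; function names are mine ):

```python
import subprocess

def route_cmd(target):
    """amixer call to select the sound output (old Raspbian: 0 = auto, 1 = analog jack, 2 = HDMI)."""
    numid = {"auto": 0, "analog": 1, "hdmi": 2}[target]
    return ["amixer", "cset", "numid=3", str(numid)]

def volume_cmd(percent):
    """amixer call to set the system PCM volume."""
    return ["amixer", "set", "PCM", "%d%%" % percent]

# on a real RPI you would run e.g.:
# subprocess.call(route_cmd("hdmi"))
# subprocess.call(volume_cmd(80))
```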

I read here about the ADSR: "We call the pattern of attack, decay, sustain, and release the ADSR envelope shape."
And I also played with it already in Sonic Pi; would be nice to have that too!
Again I want to use my simplified method, but that means: fix A to (5%, 100%), adjust D to (x%), S to (y%), fix R to (95%).
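That simplified envelope can be sketched as a piecewise linear gain over the note duration ( names mine; A fixed to end at 5%, R fixed to start at 95%, the D end point and S level adjustable ):

```python
def adsr_gain(p, d_end, s_level):
    """Gain 0..1 at position p (0..1) of the note: attack to 5%, release from 95%."""
    a_end, r_start = 0.05, 0.95
    if p < a_end:                                              # attack: 0 -> 1
        return p / a_end
    if p < d_end:                                              # decay: 1 -> sustain level
        return 1.0 - (1.0 - s_level) * (p - a_end) / (d_end - a_end)
    if p < r_start:                                            # sustain: hold the level
        return s_level
    return s_level * (1.0 - p) / (1.0 - r_start)               # release: level -> 0

# multiply each sample i of an slength array by adsr_gain(i / float(slength), 0.3, 0.6)
```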

Operation of the ADSR window:

show ADSR and OSCI on a sinus,

and it works with the signal edit function too.

Changed the layout and included the SYSTEM Sound output selection and SYSTEM Sound Volume,

and made an additional Color Theme.

Now testing the delay between key press and sound; check on mixer.pre_init(..., buffer=xxx).
Also rework the play sound:
-a- depending on the "Note ( N 1 .. 88 )" frequency and the Fs sample rate there is a specific Samples/sinus for that Note.
-b- depending on the Sound duration target, Sinuscount full sinus will fit in that duration.
+ that NOTELIST with those 2 data is created on change of sample rate and duration, and so it is not calculated again for play at each Note.
But there is still the creation of that long array, Samples/sinus * Sinuscount long,
slightly different for each Note ( from the duration target ), just to get only full sinus!
The next improvement would be to take out the calc of the first sinus:
in a 2 dimensional array I create a list of sinus ( with max amplitude ) for each Note.
In case of SINUS EDIT ( square / triangle... ) ( and on change of sample frequency ) it has to be done again.
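The NOTELIST numbers can be computed like this ( a sketch, names mine; the rounding also reproduces the C#5 / D5 collision described below ):

```python
def note_list(fs, duration):
    """Per piano key 1..88: (samples per sinus period, full sinus periods in the duration)."""
    out = {}
    for n in range(1, 89):
        f = 440.0 * 2 ** ((n - 49) / 12.0)       # key number to frequency
        samples_per_sinus = int(round(fs / f))   # rounded: nearby notes can collide!
        sinus_count = max(1, int(duration * f))  # only full periods are kept
        out[n] = (samples_per_sinus, sinus_count)
    return out

nl = note_list(8000, 1.0)
print(nl[53][0], nl[54][0])  # C#5 and D5 both give 14 samples/sinus at Fs 8000
```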
Also I changed from heavy diagnostic printout to printing 2 log files ( in RAM DISK ): a settings info file and the note_list file.
So let's take a look at the file "keyboard_note.txt":
at a Fs of 8000 I see the first problem already at C#5 / D5, which both have a Samples/sinus of 14, meaning they will be the same sound... So the problems do not start only at Fs/2 = 4000 Hz ( sampling theorem ), they already start around 550 Hz!! ( Or where is my mistake? )
And from D7 up there is no sound for the sinus at all??

Exporting the sinus data for all Notes to file is possible but too slow, so it has a diagnostic switch, and another diagnostic switch I made operable by keyboard [CTRL][d].

Now I tested making the note_list, the proto_sinus and the 88 sounds first,
and when you play, just play the ( existing ) sound: now I have a good keyboard response.
But the make-ALL-notes loop doing all this work needs 105 sec! And it must be called again after any tuning change, even pan and amp
( but not for system VOL and Source and play(..) count adjustment ).
There is only one way to use that:
the program starts in EDIT MODE, with the fast edit and the 1 sec delay when you play a Note,
and there is a new button [PLAY], and [EDIT] for switching back.
In PLAY MODE it first generates the 88 Sounds using your last edit adjustments ( needs that long waiting! ),
but then you can play via keyboard or mouse without that 1 sec delay.

To test the real speed of that PLAY mode I start with a [play FILE] button which runs an imported array from the file "song.csv" ( same directory! ),
which contains lines with: key,play,wait,xxx like "44,9,1000,E".
That means key 44 == Note E, play 9+1 times the duration, wait 1000 msec to play the next Note ( counted from the beginning, not the end, of this Note! ( tempo ) ).
Starting with the creation of the file I could play 5 Notes in a second
( keyboard and mouse animation / operation could never do that ).
Looks like speed is no problem any more ( after you took the long wait for the sound generation batch ).
AND now I can even play chords, without using more mixer channels...
That is controlled by a "0" wait in the play file; I try it like:
but it works only in PLAY MODE; in edit mode that file play timer has to be disabled anyway.
You still can press that button to test the file.
If you just loaded that python tool, it will fetch an example song.csv file from my site if it does not find one in your work directory.
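A sketch of reading that file format ( names mine; the +1 follows the "9+1 times" rule above, matching pygame's play(loops) counting ):

```python
def parse_song(text):
    """Parse song.csv lines "key,play,wait,name" into (key, total plays, wait ms, name)."""
    notes = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, play, wait, name = line.split(",")
        notes.append((int(key), int(play) + 1, int(wait), name))  # play=9 means 9+1 times
    return notes

song = parse_song("44,9,1000,E\n45,0,0,F")  # a 0 wait starts the next Note at once: a chord
```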

Using hours / days to play with .mid and .wav files, and reading about tools / specs / online creation and conversion tools,
I finally came across a midi file utility for python: MIDIUtil.

sudo apt-get update
sudo apt-get install python-midiutil

Now, what I did first was a loop to create a small midi file for each piano note.
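Such a loop could look roughly like this ( a sketch: the import path and call signatures follow the python-midiutil package as I understand it, please check on your install; the key-to-pitch offset is standard, piano key 1 = A0 = MIDI pitch 21 ):

```python
def key_to_pitch(key):
    """Piano key number (1..88) to MIDI pitch; key 1 = A0 = MIDI 21, key 49 = A4 = MIDI 69."""
    return key + 20

def write_note_file(key, path, duration=1.0, volume=100):
    # imported here so the sketch stays loadable without the package installed
    from midiutil.MidiFile import MIDIFile                      # python-midiutil
    mf = MIDIFile(1)                                            # one track
    mf.addTempo(0, 0, 120)                                      # track, time, BPM
    mf.addNote(0, 0, key_to_pitch(key), 0, duration, volume)    # track, channel, pitch, time, duration, volume
    with open(path, "wb") as f:
        mf.writeFile(f)

# for k in range(1, 89):
#     write_note_file(k, "/run/shm/note_%02d.mid" % k)          # RAM DISK path is my assumption
```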
Next I try the same from the main program ( to select it in a kind of PULL DOWN MENU; the HDMI... selection tool was a good start for this GUI ).
When the 88 Midi files are created in RAMDISK you are in PLAY MODE,
but as the midi is just sent via PYGAME to RASPBIAN there is not much to adjust
( only sys vol and sys source ). Still there is the button to play the song.csv file,
but like in the edit mode, the Note timing can not be used with the Midi Note files ( possibly in a later version ).

Then I found the trick to select MIDI instruments in the MIDI files:
now I have channel 0 PIANO, channel 9 DRUMS, and on channel 0 the Instruments 1 .. 128 (-1) ( even if not all give a sound?? ).

Now the Instrument SELECTION INPUT NUMBER is in the PYGAME window.
I use an import file from the same directory, which I found here;
no need to use the IDLE command line input anymore.

And the download is a ZIP file which also includes the soundselect tool and all the .sh and .desktop files you need for the above operation.

I was still looking for a way to replace the ugly manual 1 .. 88 sounds in makeallsounds and playsounds,
and now I could do it with an array. ( From the python books I did not get that a list works like pointers to objects. )

mySounds = []    # reset for append
for k in range(1, 89):
    mySounds.append(pygame.sndarray.make_sound(mytmp(k)))   # index 0 .. 87
    sys.stdout.write('.')                                   # progress dots

The 1.5 min sound generation, as bad as it is, now has a progress slider.
This is V4.9.4.
! But when I tested using MIDI files, loading them into sounds and loading those into that array,
the result was an ugly "gruck" sound, the same for all Notes.
Other tests about the duration of MIDI file Notes show you can play faster with duration=0.5
instead of 1.0 [s], but the sound gets bad until there is no sound at all. Anyway, after the files are generated you can not change anything, just play them / send them to the OS. I am still looking for a way to send a MIDI event for a Note from PYTHON to the OS and have that play it??


I put this on HOLD and go back to try some more RPI sound programs here.

Back from here, where I think about involving an arduino as a semi MIDI device / meaning: the arduino sends normal USB serial, and python catches it and emulates a virtual MIDI device which can be connected to a software synth.
-a- first step is a separate python test according to this.
-a1- copy the file
-a2- copy / add the /data/midikeys.png
-a3- I started IDLE ( python 2.. ) and tried to run it, but got an error about __file__,
so I changed that code to find that midikeys.png.
I think I read that the default is to start in output mode; it did not. But I also do not know how to make IDLE say RUN with a parameter like "--output" ( -o ),
so I just do not use IDLE and start from a terminal with
cd /home/pi/python_audio/
python -o
that is how I called it,
and I see the nice keyboard ( and with the key design "q ..\" and for # "!...+" ).
-a4- In a new terminal window start FLUIDSYNTH,
aconnect -i -o
aconnect 129:0 14:0
and I hear the sound!

- I see very nice code and understand it after testing: the keys have 3 regions for mouse press, which make 3 velocities ( sound volume: 42, 84, 127 ) ( but that can not work for the keyboard ).
- He works with key down and up, for keyboard and mouse button, making MIDI Note ON / OFF depending on that timing, which gives a very natural sound!!
! This guy is a genius: Lenard Lindstrom!
By the way, I checked: the programs in the "examples" subdirectory are in the public domain.

Now I use it and strip it down to a virtual MIDI device with USB input, no keyboard
- meaning: use OUTPUT mode only / delete the options -o -i -l /
but I added a command line parameter for the instrument ( -i yy ).
An Arduino ( I used a micro pro; as the arduino IDE on linux does not know it, I treated it as a Leonardo and just lose some LED function ) sends pseudo MIDI via USB to the "python pygame midi serial converter",
and that connects to the MIDI server fluidsynth.

Still in preparation for the new version of my keyboard I work on another tool.
From python pygame I see the need to execute system commands
( in the HDMI / volume control we have used the subprocess command already ),
but I mean a small ready program to execute and show a command AND show the system answer.
Sounds easy, but as usual it is tricky; I needed to make a multiline window from pygame texts
to show the system answer, including catching / executing & erasing / newlines in the string.
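The core of such a tool, run a command and catch the answer as lines for the window, can be sketched with subprocess alone ( names mine ):

```python
import subprocess

def run_and_catch(cmd):
    """Execute a shell command; return ( exit code, output lines ) for a multiline window."""
    p = subprocess.Popen(cmd, shell=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, _ = p.communicate()
    lines = out.decode("utf-8", "replace").splitlines()   # one entry per window row
    return p.returncode, lines

rc, lines = run_and_catch("echo hello")  # rc 0, lines ["hello"]
```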

syscall2 allows the command line parameters -h, -i 'command' ( like -i 'ls -la' ), and -e,
where the command should be in ' ' and -e will execute it directly; without it, it waits for your start click. This looks ready to be spawned by a master program.
And: code