I would like to adapt the Bonsai workflow used in the International Brain Lab behavioral protocol to play audio tones rather than present visual stimuli. As a disclaimer, the workflow is quite dense, with many nested workflows, and I'm still learning the ins and outs of Bonsai. Any help would be greatly appreciated!
The workflow uses input from a Bpod to initially present a circular grating at an X location on an iPad screen. Then, using input from a rotary encoder, a closed-loop nested workflow translates the X location of the stimulus across the screen until a StopCondition is met, which triggers the next state for the Bpod. The stimulus relies on the BonVision environment and uses parameters (Position, Contrast, etc.) defined in a Python script. I would like to adapt this workflow to initially present an auditory tone through one of two speakers and then use the rotary encoder input to modulate the amplitude of the tone between the speakers until the StopCondition is met. I think I have identified the most relevant nested workflows that will need to be replaced. Does this sound like a reasonable plan?
This is the main top level of the current Gabor2D.bonsai. The Stim node contains a nested Workflow.
The InitializeInputs and Environment Definition nodes initialize the visual stimulus parameters. I would like to change these nested workflows to initialize the audio stimulus instead. The UpdateInitPosition and ClosedLoopStimulus nodes manage the new StimLocationX value and the lateral translation of the stimulus based on rotary encoder input. I would like to change these nested workflows so that an audio tone plays in one speaker and the rotary encoder input modulates the amplitude of the tone across the two speakers.
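To make the goal concrete, here is a rough sketch of how the encoder-to-amplitude logic could look, e.g. inside a PythonTransform node. This is not taken from the IBL workflow; the function name and the `ticks_per_full_sweep` parameter are made up for illustration:

```python
def ticks_to_pan(ticks, start_pan=-1.0, ticks_per_full_sweep=400):
    """Map accumulated rotary encoder ticks to a pan position in [-1, 1].

    start_pan:            where the tone starts (-1.0 = fully in the left
                          speaker; hypothetical starting side)
    ticks_per_full_sweep: encoder ticks needed to sweep the tone across
                          the full left-to-right range (made-up value;
                          would need calibration per rig)
    """
    pan = start_pan + 2.0 * ticks / ticks_per_full_sweep
    # Clamp so the pan position never overshoots either speaker
    return max(-1.0, min(1.0, pan))
```

The idea is that the pan position plays the same role StimLocationX plays for the grating: the encoder continuously updates it, and the StopConditions test it against target values.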
I have been able to test the USB speakers I would like to use with a simple workflow. I don't think this is the right approach, but by changing the Position value of the CreateSource operator, the tone plays from either the left or the right speaker. However, this seems to be an all-or-nothing approach: the tone is entirely in the left speaker with Position set to (-1, 0, 0), or entirely in the right speaker with (1, 0, 0). Instead, I would like the amplitude of the tone to equalize between the two speakers when the input reaches one StopCondition, or to be reduced to zero in one speaker when the other StopConditions are met.
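For the amplitude behavior described above, one alternative to moving a 3D source Position is to compute per-channel gains directly and apply them to the left and right channels. A minimal equal-power panning sketch (an assumption about how this could be implemented, not something from the existing workflow):

```python
import math

def pan_gains(pan):
    """Equal-power panning: pan in [-1, 1] -> (left_gain, right_gain).

    pan = -1 -> tone only in the left speaker
    pan =  0 -> equal amplitude in both speakers (~0.707 each)
    pan = +1 -> tone only in the right speaker
    """
    angle = (pan + 1.0) * math.pi / 4.0  # maps [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)
```

With gains like these, the StopConditions map naturally onto pan values: the "equalize" condition corresponds to pan = 0, and the two "silence one speaker" conditions to pan = -1 and pan = +1. The cosine/sine form keeps the summed power constant as the tone moves, so perceived loudness stays roughly even, unlike a plain linear crossfade.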
I know this is quite an extensive project; my feeling is that it would be best to adapt the existing workflow rather than start from scratch. Any help or advice would be greatly appreciated! Below is the current workflow as it stands for the visual stimulus. Please let me know what other information would help with your guidance. Many thanks!
Gabor2D.zip