
NeuralDSP / Darkglass helix type thing....? Now with Rabea demo!

Comments

  • Reading through the stuff...they've thought of a lot of stuff.
    Read my guitar/gear blog at medium.com/redchairriffs

    View my feedback at www.thefretboard.co.uk/discussion/comment/1201922
  • monquixote Frets: 17869
    tFB Trader
    lysander said:
    I don’t think they would be able to let the users make their own profiles / models because neural networks need a lot of compute power and a lot of data to be fitted, plus almost certainly some expertise to tweak things till they work.

    The advantage of profiling an amp is that you've got access to an unlimited test set, so in that sense it's not a problem.

    I could see it potentially needing a bit of grunt to train, but it depends how complex the network is, and it might not be all that fancy.

  • lysander said:
    I don’t think they would be able to let the users make their own profiles / models because neural networks need a lot of compute power and a lot of data to be fitted, plus almost certainly some expertise to tweak things till they work.
    I've read this elsewhere. I don't really understand it. I can download Tensorflow with Python and use PyInstaller to make an Android app with a trained up neural network inside. Why couldn't you do that on an ARM or SHARC chip?

    Bye!

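Drew's point is easy to sketch: running an already-trained network is just a fixed chain of multiply-adds and nonlinearities, which is well within reach of an ARM or DSP chip. A minimal NumPy sketch with random placeholder weights (standing in for a trained model, not a real amp capture):

```python
import numpy as np

# Inference with a pre-trained network is fixed arithmetic:
# matrix multiplies plus a nonlinearity, no backprop involved.
# The weights here are random placeholders, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 1)), np.zeros(8)
W2, b2 = rng.standard_normal((1, 8)), np.zeros(1)

def forward(x):
    """One audio sample in, one sample out."""
    h = np.tanh(W1 @ np.atleast_1d(x) + b1)  # hidden layer
    return float((W2 @ h + b2)[0])           # output sample

samples = [forward(s) for s in np.sin(np.linspace(0, 1, 4))]
```

Per sample this is a handful of multiply-accumulates, exactly the workload DSP chips are built for; training is the expensive part, not this.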
  • lysander said:
    I don’t think they would be able to let the users make their own profiles / models because neural networks need a lot of compute power and a lot of data to be fitted, plus almost certainly some expertise to tweak things till they work.
    I've read this elsewhere. I don't really understand it. I can download Tensorflow with Python and use PyInstaller to make an Android app with a trained up neural network inside. Why couldn't you do that on an ARM or SHARC chip?
    I'd imagine that the SHARC chip would be most useful there - it's almost analogous to GPU acceleration in a PC.

    And yeah...it's reasonably likely that this isn't the most complex of neural networks, and it comes pre-trained to do one job. Not only that, but it doesn't have to do anything in realtime in terms of the profiling. I doubt performance is going to be a problem there.

    There's one feature which nobody seems to have tagged as important - this thing uses wifi to connect to other devices, not Bluetooth. I can't put my finger on exactly why, but this pleases me no end.
    <space for hire>
  • lysander said:
    I don’t think they would be able to let the users make their own profiles / models because neural networks need a lot of compute power and a lot of data to be fitted, plus almost certainly some expertise to tweak things till they work.
    I've read this elsewhere. I don't really understand it. I can download Tensorflow with Python and use PyInstaller to make an Android app with a trained up neural network inside. Why couldn't you do that on an ARM or SHARC chip?
    I'd imagine that the SHARC chip would be most useful there - it's almost analogous to GPU acceleration in a PC.

    And yeah...it's reasonably likely that this isn't the most complex of neural networks, and it comes pre-trained to do one job. Not only that, but it doesn't have to do anything in realtime in terms of the profiling. I doubt performance is going to be a problem there.

    There's one feature which nobody seems to have tagged as important - this thing uses wifi to connect to other devices, not Bluetooth. I can't put my finger on exactly why, but this pleases me no end.
    Because bluetooth is shit and should be put in the bin??

    Bye!

  • lysander Frets: 574
    SHARC is a DSP chip and is about as far from a GPU’s architecture as chips can go. 
    While it may do a very good job at running a trained network, I don’t think it would be very good at all at running the type of back propagation algo that is needed for training.
    I’d be happy to be proven wrong if someone has a paper or similar that shows a performant implementation.
    Anyway there clearly is a technical issue of some sort if they're not offering this to users, given that it's a big disadvantage over their competition. I'd be interested to hear more on this from them.

  • lysander said:
    SHARC is a DSP chip and is about as far from a GPU’s architecture as chips can go. 
    While it may do a very good job at running a trained network, I don’t think it would be very good at all at running the type of back propagation algo that is needed for training.
    I’d be happy to be proven wrong if someone has a paper or similar that shows a performant implementation.
    Anyway there clearly is a technical issue of some sort if they're not offering this to users, given that it's a big disadvantage over their competition. I'd be interested to hear more on this from them.

    But once you've got the model fully trained up, you just deploy the 'prediction' version of the model. No back propagation needed.

    Bye!

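The train-once, deploy-prediction split in miniature: the gradient-descent loop below is the expensive offline step, and the shipped unit would only need the final parameters. The linear "target device" is a deliberately trivial stand-in, not how an amp modeller actually works:

```python
import numpy as np

# Toy illustration of the train/deploy split: gradient descent
# (the expensive part) runs once, offline; deployment only
# evaluates the learned parameters.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x - 1.0                 # stand-in "target device" response

w, b = 0.0, 0.0
for _ in range(500):              # offline training loop
    err = (w * x + b) - y
    w -= 0.1 * np.mean(err * x)   # gradient step on the slope
    b -= 0.1 * np.mean(err)       # gradient step on the offset

def predict(sample):              # all the deployed box needs
    return w * sample + b
```

After training, `predict` carries no trace of the optimisation loop; that is the sense in which backprop never has to run on the unit itself.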
  • lysander Frets: 574
    edited January 2020
    I was talking about ‘profiling’ of new hardware / amps by users though.
  • lysander said:
    SHARC is a DSP chip and is about as far from a GPU’s architecture as chips can go. 
    While it may do a very good job at running a trained network, I don’t think it would be very good at all at running the type of back propagation algo that is needed for training.
    I’d be happy to be proven wrong if someone has a paper or similar that shows a performant implementation.
    Anyway there clearly is a technical issue of some sort if they're not offering this to users, given that it's a big disadvantage over their competition. I'd be interested to hear more on this from them.

    Yes, it's very different in architecture, but - as far as I know - the SHARC's application domain falls within that of a GPU's capabilities, which is all I was referring to.

    And, as pointed out by Drew, we're talking about a trained model here. It'd be absolute lunacy to release a product like this with an untrained model, because it'd likely make the product behaviour relatively unpredictable over time (given that the manufacturer no longer has control over the input datasets).
    <space for hire>
  • lysander said:
    I was talking about ‘profiling’ of new hardware / amps by users though.
    Yes, but the creation of a profile doesn't involve training a machine-learning model, it involves applying it.
    <space for hire>
  • lysander Frets: 574
    I disagree: a neural network is a function approximator that has to be calibrated to the function it is trying to approximate.
    Given that even the number of controls differs between amps, it is very unlikely that a single trained network would somehow work for every amp without retraining; I can't think of a single comparable example in other fields.
    Much more likely is that each amp model is a separate trained network, and a new amp means a new training run.

    And no, the SHARC application domain is completely different from a GPU's.
    DSP processors are primarily designed for low-latency flow processing with little to no parallelism, where the primary design constraint is real-time operation.
    GPUs are designed for extremely high parallelism across fairly primitive computation units, with very little consideration for latency.
    The application domains are at opposite ends of the spectrum: there's a reason no one does real-time audio on a GPU and no one uses DSP chips for machine learning.
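The "calibrated function approximator" point can be shown with an even simpler approximator than a neural network: fit one polynomial per "amp". Two different clipping curves stand in for two different amps below; each needs its own calibration, and no single set of coefficients covers both:

```python
import numpy as np

# A function approximator must be fitted to the function it
# approximates. Two different clipping curves stand in for two
# different amps; each one gets its own polynomial fit.
x = np.linspace(-2, 2, 400)
amp_a = np.tanh(x)                  # soft-clipping "amp"
amp_b = np.clip(x, -0.5, 0.5)       # hard-clipping "amp"

coeffs_a = np.polyfit(x, amp_a, 5)  # separate calibration per amp
coeffs_b = np.polyfit(x, amp_b, 5)
```

The same degree-5 model fits either curve on its own, but the two coefficient sets come out quite different, which is the sense in which "a new amp means a new training".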
  • Jetfire Frets: 1702
    edited January 2020
    Literally this is some top level nerd discussion going on here.
  • fretmeister Frets: 24829
    Jetfire said:
    Literally this is some top level nerd discussion going on here.
    I’m going to laugh my arse off if they are both wrong.

    I’m so bored I might as well be listening to Pink Floyd


  • EricTheWeary Frets: 16353
    Jetfire said:
    Literally this is some top level nerd discussion going on here.
    This kind of discussion is one of the reasons these kinds of units pass me by. I want to know if it goes nah nah nahhh better than the last box went nah nah nahh, and not have to complete a degree in programming first.
    Tipton is a small fishing village in the borough of Sandwell. 
  • John_P Frets: 2756
     EricTheWeary said:
    Jetfire said:
    Literally this is some top level nerd discussion going on here.
    This kind of discussion is one of the reasons these kinds of units pass me by. I want to know if it goes nah nah nahhh better than the last box went nah nah nahh, and not have to complete a degree in programming first.

    Indeed. Does it chug and sing? Will it arrive before the FM3? Yes to those, and it will sell.
  • monquixote Frets: 17869
    tFB Trader
    lysander said:
    I disagree: a neural network is a function approximator that has to be calibrated to the function it is trying to approximate.
    Given that even the number of controls differs between amps, it is very unlikely that a single trained network would somehow work for every amp without retraining; I can't think of a single comparable example in other fields.
    Much more likely is that each amp model is a separate trained network, and a new amp means a new training run.

    And no, the SHARC application domain is completely different from a GPU's.
    DSP processors are primarily designed for low-latency flow processing with little to no parallelism, where the primary design constraint is real-time operation.
    GPUs are designed for extremely high parallelism across fairly primitive computation units, with very little consideration for latency.
    The application domains are at opposite ends of the spectrum: there's a reason no one does real-time audio on a GPU and no one uses DSP chips for machine learning.

    I think I read the unit has an ARM processor which could be used for training.

    It doesn't matter if it takes a while to run.

    If it's that big of a problem it could generate a training set and upload it to a PC or the cloud.
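The "generate a training set and upload it" fallback is sketchable too: the unit plays a known probe signal through the amp being captured, records the response, and the resulting (input, output) pairs are what would get shipped off for training elsewhere. The tanh stage below is a stand-in for the real amp:

```python
import numpy as np

# Sketch of on-device training-set capture: play a probe signal
# through the amp, record the response, and package the pairs
# for offline/cloud training. tanh stands in for the real amp.
sr = 48_000
t = np.arange(sr) / sr
probe = (np.sin(2 * np.pi * 110 * t)
         + 0.1 * np.random.default_rng(2).standard_normal(sr))

response = np.tanh(3.0 * probe)        # stand-in amp response

dataset = np.stack([probe, response])  # what gets uploaded
```

One second of audio at 48 kHz already gives 48,000 training pairs, which is the "unlimited test set" advantage mentioned earlier in the thread.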
  • monquixote said:
    I think I read the unit has an ARM processor which could be used for training.
    It seems a bit woolly on that - and I didn't see an ARM chip in the gutshots. However, I believe at least one of those SHARC chips has an embedded ARM core, so it's possible that's what they're referring to.
    <space for hire>
  • octatonic Frets: 33928
    Does anyone know how to compare the amount of processing of the Keystone DSP in the Axe FX III to the 4x SHARCs in the Quad Cortex?

  • monquixote Frets: 17869
    tFB Trader
    octatonic said:
    Does anyone know how to compare the amount of processing of the Keystone DSP in the Axe FX III to the 4x SHARCs in the Quad Cortex?


    Presumably there is a manufacturer's spec sheet kicking about somewhere.

    Raw grunt is probably less important than efficient algorithms though.
  • Jetfire said:
    Literally this is some top level nerd discussion going on here.
    This kind of discussion is one of the reasons these kinds of units pass me by. I want to know if it goes nah nah nahhh better than the last box went nah nah nahh, and not have to complete a degree in programming first.
    Why the hell would you associate a tangential discussion between a bunch of dev enthusiasts with the product itself? That's batty.

    Bye!
