It's pretty obvious that Verilog and VHDL, modeled after C and Ada respectively, both imperative languages, follow a drastically mismatched paradigm for hardware design, where circuits are combined and "everything happens in parallel". It becomes even more obvious when you have tried a functional alternative, for example Clash (which is essentially a Haskell subset that compiles to Verilog/VHDL: https://clash-lang.org).
The problem is, it is hard, if not downright impossible, to get the industry to change. I have heard many times, in close to literally these words: "Why would I use any language that is not the industry standard". And that's a valid point given the current world. But even for people that are interested, it might just be hard to switch to something like Clash and not give up pretty quickly.
Unlike imperative languages, functional languages with a rich modern type system like Haskell are hard to wrap your head around. It's no news that Haskell can be very hard to get into for even experienced software engineers. In 2005, after already having more than a decade of programming experience in C, C++, Java, various Assemblers, python (obviously not all of these for the same time) and many other languages, I thought any new language would mostly be "picking up new syntax" at that point. Yet Haskell proved me very wrong on that, so much that it was almost like re-learning programming. The reward is immense, but you have to really want to learn it.
And to my surprise at the time, when I got heavily into FPGAs, the advantage proved to be even stronger when building sequential logic, because that paradigm just fits so much better. My Clash code is much smaller, but also much more readable and easier to understand than Verilog/VHDL code. And it's made up of reusable components, e.g. my AXI4 interfacing is not bespoke individual lines interspersed throughout the entire rest of the code. That's mainly because functional languages allow for abstraction that Verilog/VHDL don't, where often the only recourse is very awkward "generated" code (so much so that there is an actual "generate" statement that is an important part of Verilog, for example).
So by now, I have fully switched to using Clash for my projects, and only use Verilog and VHDL for simple glue logic (where the logic is trivial and the extra compilation step in the Verilog/VHDL-centric IDE would be awkward) or for modifying existing logic. But try to get Hardware Engineers who probably don't have any interest in learning a functional programming language to approach such an entirely different paradigm with an open mind. I've gotten so many bogus replies that just show that the engineer has no idea what higher order functional programming with advanced type system is on any level, and I don't blame them, but this makes discussions extremely tiring.
So that basically leaves the intersection of people that are both enthusiastic software engineers with an affection for e.g. Haskell, and also enthusiastic in building hardware. But outside of my own projects, it just leaves me longing for the world that could exist.
Logged in just to upvote this and largely agree with you. Verilog/VHDL are stuck at the 1980s coding paradigm level, as if the industry grabbed the first working solution for automated hardware development and has clung to it.
> The problem is, it is hard, if not downright impossible, to get the industry to change.
Yes. I don't think it will until either, say, Intel does it by CEO fiat, like the Amazon memo, or a startup from outside somehow dominates the industry by using a different technology.
(I have worked both sides of this, a chip design startup that was bought by Cadence, and a medium size fabless semi company)
> I've gotten so many bogus replies that just show that the engineer has no idea what higher order functional programming with advanced type system is on any level, and I don't blame them, but this makes discussions extremely tiring.
"A monad is just a monoid in the category of endofunctors, what's the problem?"
(I'm joking, but this is an us problem and not a them problem, you can't evangelize things to people that they don't understand, and you have to reach them where they are. Yes, this is very hard work)
However one thing that us software types may not appreciate is that all the weird imperative stuff in Verilog that isn't synthesizable probably gets used in more lines of code than the synthesizable subset - because testbenches are absolutely critical to shipping hardware.
The hardware industry can't benefit from rapid iteration because every iteration costs a mask set.
> The hardware industry can't benefit from rapid iteration because every iteration costs a mask set.
Before tapeout there are steps which could benefit I suppose. But as a functional-programming n00b I don't see how that could be the solution. I would argue that improving place and route algorithms to the point that deploying to a (large) FPGA is almost as quick as compiling software would be a huge step forward.
One thing I don't quite understand about using Haskell for circuit design is that at a first glance it also seems like pure functional programming has an impedance mismatch with what the circuitry physically does. (For reference, I've programmed in Haskell extensively before.)
For example, much of Haskell directly or indirectly relies on recursion -- but this is nearly nonsense when talking about silicon! There's no call stack, for one! More importantly, any algorithm requiring repeated operations like this would be inherently inefficient and undesirable in a design space where latency matters.
I have a feeling that one reason some of these alternatives haven't "taken off" and swept away the legacy languages is because they're not really that ideal either. [1]
Perhaps there's an ideal "concurrent data processing" programming paradigm waiting to be discovered that is neither like procedural languages nor pure functional languages.
[1] This is purely my uninformed layman perspective, of course. I'd love to be corrected by people who've worked in the field.
Why do you feel there is a mismatch? Any digital circuit can be modeled with a pure function that transforms an input stream of values to an output stream of values. And that is actually precisely what Clash does.
Regarding recursion: I believe Clash does support a limited form of recursion, namely if you can prove every recursive call strictly decreases the problem size. E.g. recursion on a vector of size n must mean every recursive call is applied to a vector of smaller size.
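As a plain-Haskell sketch of the idea (not actual Clash, which does this with sized Vec types): when every recursive call strictly shrinks the input, the recursion depth is known up front, so the definition can be fully unrolled into a fixed-depth circuit, e.g. an adder tree:

```haskell
-- Sketch: structural recursion on a strictly shrinking input.
-- For a non-empty, power-of-two input length n, the recursion depth
-- is log2 n, so the whole thing unrolls into a fixed tree of adders.
treeSum :: [Int] -> Int
treeSum [x] = x
treeSum xs  = treeSum (pairwise xs)
  where
    pairwise (a : b : rest) = (a + b) : pairwise rest
    pairwise rest           = rest
```

Here `treeSum` on 8 inputs elaborates to three layers of adders rather than a loop.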
How does (non-tail) recursion work there. That inherently requires a stack, unless you limit the recursion depth and give each recursion step its own circuit.
I could imagine limiting yourself to a specific subset of haskell keeps things under control, but the mis-match seems obvious. At least from the pov "just write haskell but compile to an fpga instead of a binary".
Sorry, I edited my comment and added a part about recursion.
So you are right that not all of Haskell is compileable using Clash. But that also isn't the goal. You are using Haskell to describe your digital circuit. That generally is very different from writing a normal Haskell program. It just so happens that Haskell is good at both.
Can this be because in digital circuits state (memory) plays an important role, while its treatment in a (pure) functional language is not straightforward?
It's straightforward to implement memory in a pure function. For example, if you say a digital circuit is a function from an infinite list of inputs to an infinite list of outputs, you can create a "register" by simply adding a value to the start of the list. For example, in Haskell:
delayCircuit xs = 0:xs
take 10 (delayCircuit [1..])
-- [0,1,2,3,4,5,6,7,8,9]
That a function is pure doesn't mean it can't have internal state. As another example, it's perfectly fine to have a completely pure function with internal state:
withInternalState a = let go = get >>= \s -> put (s + 1)  -- needs Control.Monad.State
                      in snd (runState go a)  -- the final internal state, a + 1
What is important for purity is that this internal state doesn't leak outside the function. Which it doesn't for circuits. Outside of things like cosmic rays, heating etc a circuit is completely pure in reality.
> That a function is pure doesn't mean it can't have internal state.
This seems to be in direct contradiction to what is written on Wikipedia[1], specifically that "the function return values are identical for identical arguments".
If it has non-trivial internal state, how can it return identical values for identical inputs?
Sadly your example eludes me as I don't know any Haskell so it reads like line noise.
It does have identical return values for identical arguments. That doesn't mean the function can't have internal state. Maybe this is more clear in pseudo code:
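Something like this (imperative pseudo code, just a sketch):

```
function sumFilled(x, n):
    buffer = allocate(n)      // internal mutable state
    buffer.fill(x)            // mutation, but never visible outside
    total = 0
    for i in 0 .. n-1:
        total = total + buffer[i]
    return total              // identical (x, n) always gives identical n * x
```

The buffer is mutated internally, but the caller only ever sees the same output for the same arguments.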
That function example is not allowed in Haskell. It's pure functional throughout, not just at "function boundaries".
I agree that conceptually a language can be made where only function boundaries are required to be side-effect free, and internally "anything goes" as long as it doesn't pollute the outside world. This might be a good model for circuit design, especially if using something like an IO monad to store persistent state across clocks, such as flip-flops or registers.
However, this is not what Haskell does, at all. There are no mutation functions like "fill(x)". You have to create the buffer filled with x right from the beginning.
More importantly, in Haskell that buffer is defined with a recursion, so it looks something like: (x,(x,(x)))
Effectively it is an immutable linked list. Any modification involves either rebuilding the whole thing from scratch (recursively!) or making some highly restricted changes such as dropping the prefix and attaching a new one, such as (y,(x,(x))).
Haskell is not LISP, F#, or Clojure. It's pure and lazy, which is very rare in functional programming. It's an entirely different beast, and I don't see how any variant of it is a good fit for circuit design, which is inherently highly stateful and in-place-mutable.
Such a function is perfectly allowed and even encouraged, see here where I allocate a mutable array with undefined values and write and read from it in a pure function. Basically replicating the pseudo imperative code I have written before:
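A minimal sketch of that idea (not the exact code behind the link): the ST monad lets a pure function allocate and mutate an array internally, and runST guarantees none of that mutation leaks out:

```haskell
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

-- A pure function that internally allocates and mutates an array.
sumSquares :: Int -> Int
sumSquares n = runST $ do
  arr <- newArray (1, n) 0 :: ST s (STUArray s Int Int)
  mapM_ (\i -> writeArray arr i (i * i)) [1 .. n]
  vals <- mapM (readArray arr) [1 .. n]
  return (sum vals)
```

Same input, same output: sumSquares 4 is always 30, no matter how the array gets scribbled on internally.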
Monadic computations are very much in the spirit of Haskell. I don't think the Haskell community view the ST monad as a hack. It's one of many tools in the box of any Haskell programmer.
What I was thinking of was more like a linear feedback shift register. It has zero inputs and outputs random bits, thanks to its non-trivial internal state.
Or as we were saying, a register. Could for example be a function which takes a read/write flag and a value, writes the value to the internal state if write flag is set, and returns the internal value.
edit: In my softcore I have a function to read/write registers, it modifies global state rather than internal state so would not be pure either.
edit2: I guess my point is, for me "state" is something that is non-trivial, and retained between invocations. Your example is trivial and not retained (it's completely overwritten always).
You are seeing a single invocation as running your circuit for a single clock cycle after an arbitrary number of clock cycles. That's the wrong approach. A single invocation of a circuit takes a stream of values and produces a stream of values. Every invocation starts from clock cycle 0.
So yes, if your circuit depends on external signals driven by registers living in a different circuit, you need to pass those in as inputs. Essentially circuits are composable just like functions.
A linear feedback shift register always produces the same stream of values no matter how many times you run it. It's completely pure.
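For example, a 16-bit Fibonacci LFSR as a pure function from seed to bit stream (plain-Haskell sketch, not Clash; using the common taps for x^16 + x^14 + x^13 + x^11 + 1):

```haskell
import Data.Bits (testBit, shiftR, bit, (.|.))
import Data.Word (Word16)

-- Pure LFSR: the same seed always yields the same infinite bit stream.
lfsr :: Word16 -> [Bool]
lfsr s = testBit s 0 : lfsr s'
  where
    -- feedback bit: xor of tap bits 0, 2, 3 and 5
    fb = foldr1 (/=) [testBit s i | i <- [0, 2, 3, 5]]
    s' = (s `shiftR` 1) .|. (if fb then bit 15 else 0)
```

Run it twice with the same seed and you get the same "random" bits, which is exactly the purity being claimed.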
Still, I would be very interested to see how one would implement a shift register, let's say something like the 74HC165, so I can see how the pure functions interact with the register state.
Runnable using replit. To make it as simple as possible I didn't use Clash and only used types available in standard Haskell (Bool instead of a bit type, a 64-bit integer as register state, etc.). Every element in the list represents a value coming out of the register on the rising edge of a clock cycle. You can easily extend it if you want latches, enables etc. But that doesn't really change the core point of the code.
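Along these lines (a reconstruction, not the exact replit code; Bool for bits, a Word64 for the register state):

```haskell
import Data.Bits (testBit, shiftL)
import Data.Word (Word64)

-- Parallel-in/serial-out shift register as a pure stream function.
-- Each input is (load, parallelValue); each output is the bit shifted
-- out on that rising clock edge (MSB first).
shiftReg :: [(Bool, Word64)] -> [Bool]
shiftReg = go 0
  where
    go _ [] = []
    go s ((load, v) : rest) =
      let s' = if load then v else s `shiftL` 1
      in testBit s' 63 : go s' rest
```

Load a parallel value on one edge, then clock out its bits one per cycle after that.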
Much appreciated. I think I get the gist of it, even though it looks very complex compared to the Verilog counterpart. To be fair, my lack of Haskell knowledge doesn't help.
But it's really helpful to get a feel for the different approach.
I mean, Haskell is definitely an acquired taste if you haven't used other languages in the same style before. Also note that writing this in actual Clash would be a one-liner, because stuff like registers and shifts is available as part of the standard library.
I've tried to get into Haskell, and while I don't have a huge problem with the overall concepts most of the time, although a bit alien at times, my brain just can't seem to handle the syntax.
I've found myself thinking differently about code and using a lot more functional-ish concepts when writing my "normal" code though, so I do like the exposure.
So your point is you can have temporary variables in pure functions. That's fine.
However how do you implement registers? Do you have to pass around the "register file"? And if so, how does that work?
Like, what would a parallel-to-serial shift register look like? I.e. an asynchronous latch updates the internal shift register and an independent clock shifts out the values.
Why not something declarative, like Prolog (bonus: sounds like it could be the advanced version of Verilog) or HCL (Hashicorp Configuration Language, as used for Terraform)?
I've been thinking of writing a tool to use the latter for EDA, so you could build libraries of composable reusable blocks, take application note type examples directly rather than re-draw, etc. and (what motivated me initially) store it all in git. FPGAs seem at least as good a fit.
This has been tried several times, and used to be a fairly common EDA academic research subject (as have other languages like Haskell and SML). I first saw Prolog used for HW design in 1995 or so. A somewhat newer example is this:
https://www.researchgate.net/publication/220760154_A_Prolog-...
One important area is digital systems testing. For that, stuff like Prolog seems to me a better fit than for actual HW description. For test cases I want to efficiently describe the behaviour, esp. inputs, expected outputs and changes to state.
For the design, I'm not only interested in getting the correct behaviour (functional correctness), but also in _how_ I get that behaviour: how many resources are needed, what the routing looks like, how many metal layers it takes to route, how much power it will consume, how fast I can clock the design.
I had a similar experience with HardCaml back in 2015/2016. The benefits were overwhelming: higher productivity, higher reusability, fewer intractable bugs, tighter TTM, and no more slippage in the deliverables timeline. The performance was comparable to a Verilog-only project (density, path length). In the end, the effort was shut down by management because the approach was “too complicated”.
And commonly also because it's, currently at least, very hard to find people for it. Which is an entirely valid concern, but is still so frustrating. Because while C for system programming might not be ideal, it still makes sense, whereas Verilog (that was meant to look like C because people knew C for programming already) for hardware design is just so much more of a mismatch.
At least for me, OOP was the obviously better model for modelling HW than either the imperative or functional paradigms. Modules and cores in HW have internal state and define interfaces. The internal state is updated due to both the internal state itself and changes to inputs.
SystemVerilog fixed several pain points in Verilog, one of the more important being the ability to encapsulate, bundle ports and protocol handling (logic and state) in interfaces that can be instantiated in modules. You can still shoot yourself in the foot by "programming in Verilog", that is, forgetting that you are in fact describing physical and electrical reality, but you can write code with fewer errors.
One thing I see SW people not understand is that the toolchains for ASIC development are much bigger. You have simulators, compilers, linters, floorplanners, signal integrity analyzers, detailed Place & Route, several levels of formal verification tools, test insertion and test automation tools etc etc. All of these tools need to parse one or more source files. And all these tools need to parse the source files in the same way. This, in combination with the possibly HUGE cost (money and time to market) of taping out a chip that requires a respin, a change of one or more of the masks in the maskset, are big reasons why the industry is very conservative towards radical changes in language. The subset of language features that are safe to use (i.e. correctly parsed and usable through the complete toolchain) is often surprisingly small. And to be honest, describing the HW you want is often the least hard part of chip design.
Thinking that the chip industry and associated EDA industry simply don't know CS and modern tools, SW paradigms probably means missing what is actually hard when designing chips.
A final note on functional languages - as you state "It's no news that Haskell can be very hard to get into for even experienced software engineers." This is also a possible reason for lack of adoption. You basically reduce the set of potential engineers. And finding good digital, SoC and chip designers is already very hard. After seeing several attempts and cool ideas I'm quite convinced that the subset of engineers groks and likes functional languages and groks and likes electrical engineering, digital and chip design is too small to scale to meet industry needs.
Rant off. I'm accepting that yes I'm probably in the "has no idea what higher order functional programming with advanced type system is on any level" camp.
Higher order functional thinking is such a ridiculous superpower, but it's also really hard to do correctly without some programming tool helping you along and tons of experience.
In your case, any piece of hardware can effectively be abstracted by some top-level function that takes arguments of time, inputs, outputs, environmental variables, etc.
Mastering higher order function development is how one can properly address complexity in any domain.
SQL views are another good example of this. In fact, you could very likely build a competent digital circuit design system around SQL ideologies. SQL is the best tool I've ever found for modeling and applying constraints.
SV and VHDL are nasty languages, but as you said, they are "industry standard". Chisel and Haskell-based approaches are better but virtually nobody adopts them.
I tried to go in another direction, to make design code shorter by using Clojure syntax. The result is here: https://github.com/m1kal/charbel and works for simple modules. I don't expect wide adoption, but we need to look for new directions instead of sticking to the methods and languages from the 80s.
Writing performant DSP for an FPGA involves using intrinsics/structures specific to the device, a multiply-accumulate for example. Can you easily take advantage of stuff like that in Clash? What about using the existing device-dependent IP libraries? I agree Verilog/VHDL are fairly dated, and not designed for how we use these devices today. An increasing number of people I talk to are generating their designs from HDL Coder or Model Composer type tools.
> So that basically leaves the intersection of people that are both enthusiastic software engineers with an affection for e.g. Haskell, and also enthusiastic in building hardware. But outside of my own projects, it just leaves me longing for the world that could exist.
It seems to me that a shop made exclusively of people with this background would be massively more productive, and quite an attractive place to work for anyone with the relevant skill sets.
What keeps such a shop from existing? The need for domain specific knowledge? Or does scale require too many people? Are hardware design shops usually in house and thus harder to out-source? General worries about communicating with customers expecting verilog? Worries about being able to on-board new people?
Honestly, despite all of the above, I think a Haskell-based, quick-iteration FPGA hardware design shop sounds amazing. If I had any kind of experience with hardware design I would jump at the chance to join. Even without that experience this sounds worth a shot, but I fear that lack of experience might be why I think this is feasible.
The size of that intersection. And the need to at some point interface to the rest of the world, including the EDA world.
Unless you build your own products from system to physical chips in house (have your own fab) sooner or later you will need to interact, exchange source code or netlists in some form. Fabs expect you to use certain (golden) tools before accepting a design for manufacturing.
I'd be interested in trying one of these more modern languages, but every time I see a basic example it looks clunky: rather than a new language written for hardware, it's 'bolted on' to a language for SW, and sometimes this looks inelegant. Maybe it's just my bias, but often the basic examples look overly complex to my SystemVerilog-addled brain.
Also, how are multiple clock domains handled, complex types, fixed point arithmetic, are those better/safer?
If you have any examples I'd be eager to see them.
Verilog definitely sucks but I think the problem with new HDLs (Clash, Bluespec, Chisel etc) is they don't necessarily make the hard bits of hardware design easier, they just help with the tedious stuff that whilst annoying ultimately doesn't take up much of your time.
For example, a good type system definitely makes module interfaces cleaner, saves you having to dig through warnings/lint reports to find stupid errors, and in general makes it easier to rapidly build a new system out of IP. However, for your typical hardware project you're not normally wanting to radically configure things or continuously build whole new systems. You have a fixed or slowly evolving spec, so when you first put things together, and occasionally need to add or change blocks, there's a bunch of tedious error-prone wiring to be done; ultimately improving that process makes your job easier but doesn't open up radical new ways to do it.
Significantly more powerful generation/parameterisation capabilities are another thing new HDLs can excel at. However, for anything sufficiently complex (e.g. a cache), building something scalable that functions correctly and performs well across a broad parameter space is just incredibly hard. Perhaps a new HDL lets you build a wonderful crossbar for some interconnect protocol (e.g. AXI), for instance, where you can have arbitrary numbers of ports, arbitrary data widths for each, clock domain crossing etc etc, and it just handles whatever you throw at it, and you get some nice elegant code that generates it all too. Though depending upon the actual configuration you want, you'll want very different micro-architectures for it. The one-size-fits-all approach will be one-size-fits-this-corner-of-the-parameter-space in reality, and if you want a truly one-size-fits-all, you'll find your initially nice elegant code ends up with a bunch of special cases all over it to produce optimal designs. Also, in reality you don't need something super flexible for all cases in any given project; building the thing your specific use case requires works fine. Then for the next project you can adapt it. A tedious process you'd love to see improved? Sure. One which is holding you back from doing amazing things? That I'm less convinced of.
There's also the downside that it is important to have a reasonable idea of the circuit you actually produce, and this is generally where the hard stuff happens. You have tricky timing paths to deal with, power issues to sort out, area to reduce etc. If you're working on secure hardware (like I do) you've got side channel and fault injection attacks to detect and defeat. Doing this requires a deep understanding of how the HDL you're writing becomes standard cells on the chip.
With a new HDL mentally mapping some output of an implementation tool back to the original HDL can be very tricky and consequences of code changes can be surprising. This makes the hard stuff harder.
Ultimately hardware is not software, there's a different set of constraints you're working to and a rather different end product. There's plenty of good stuff to take from the software world to apply to hardware but it doesn't all just map across cleanly.
Of course some of the problems I talk about above can be solved by tooling, they're not inherent to the languages. Still that tooling needs to be created and may be very hard to build (how many times have you seen someone claim something will be amazing just as soon as the tools exist to make it useable?).
I think my perfect HDL right now would look rather like SystemVerilog but with a decent type system and restricted semantics so unsynthesisable (or synthesises but not into a circuit you'd ever actually want to build) code simply won't build along with improvements around parameterisation and generative capabilities.
Taking a step further from there is also perfectly possible but I think we need to do a lot of work around tooling and verification for more flexible designs first to understand how to do that well.
I do need to spend more time with new HDLs. The last serious project I did in one was a CPU (well, two CPUs, but both derived from the same code base) in Bluespec around 10 years ago for my PhD. Bluespec has an open-source compiler now, plus there are various languages to explore. Maybe I'll try building a RISC-V core in each and see how it goes.
I've tried pretty much every 'NeoHDL' out there, and so far Bluespec was the one that actually made me feel like it's a step in the right direction, instead of more of the same.
Its atomic transaction based modelling maps superbly well into hardware clock cycles, and after using it for a while you learn to visualize quite well what shape of RTL will be generated. Plus its standard library provides some really powerful _and practical_ interfaces/implementations that make building things fast and safe. Oh, and a lot of things you would generalize with fairly heavy-handed and bespoke abstraction in things like Chisel/Spinal/Migen/Amaranth just map directly into Bluespec's type system constructs, making interacting with existing codebases much less of a reverse engineering effort.
The Bluespec SystemVerilog syntax is a bit... quaint, but if you're used to (System)Verilog you know the deal. I personally want to spend some time soon learning the Bluespec Classic/Haskell instead, but that's mostly because I already know some Haskell...
Overall, highly recommend giving it another shot. It made me enjoy writing HDL code again after years of frustration with other offerings. It's one of the few languages out there that actually make me feel 10x as productive thanks to the strict type/runtime semantics.
Yes I did quite like bluespec, though after my time using it I drew two conclusions.
1. The extra power I had to build parameterisable/configurable designs didn't help as much as you'd hope as I said in my earlier post. Making something that can deal with a large parameter space still remains hard, the easier things look a lot nicer and are less fiddly to work with but on a practical level you're not gaining much.
2. One of Bluespec's big ideas is you write the rules, with conditions, and it works out the rest. The problems come when this maps to Verilog and you need to start fixing implementation issues. I ended up having to develop good intuition around exactly how the compiler would schedule the rules so I could avoid issues (e.g. combinational loops). It felt like Bluespec really pushed you into deep pipelining: a rule does one thing, the result gets flopped, another rule takes over next cycle. Fine for some designs, but sometimes you've got a whole bunch of stuff you want to get done in a single cycle (e.g. within a CPU), and there I felt like the rules construct ultimately just made it harder to build anything.
I do remember discovering some 'hidden' language features. Knowing it all turned into Haskell underneath via term rewriting I tried some syntactic constructs that weren't documented but I felt should work and often found they did! This gave some fairly powerful stuff to play with though never good to rely on undocumented features.
As I said it was over 10 years ago so my memory is hazy, things may have improved since there as has my skill and experience. I should really take another look now the compiler is open.
Agree Bluespec is great. In college we went from 0 to a Bluespec multicore RISC-V processor running Linux on an FPGA in a single undergrad semester, every step along the way felt intuitive.
Do you have a pointer to Bluespec Classic (not the "modern" Bluespec)?
I would make a distinction between "generator HDLs" like Verilog, Lava, Chisel, MyHDL, nMigen, etc and languages that actually are interpreted as logic (Verilog, Bluespec, Clash, Silice, Handel-C, ...). Yes, Verilog can do both and is a ghastly language. The former class doesn't really raise the abstraction, it just makes generators easier to write.
As I've been saying for decades, I really want a language in which it is as easy to write an implementation as it is to write a [timing] software model; ideally I can use the same language for both! I know it can be done and I have been trying to scratch that itch a few times.
Wow, didn't know that Bluespec was open sourced. I used it in my Computer Architecture class for assignments 6 years ago, and my experience was that it was much, much better than Verilog (the type system was far better) but there was too little learning material/documentation out there on the Internet.
bsc, the bluespec compiler, has been open sourced under the MIT license. It's the real deal, the same bsc that has been used in the industry for years now.
I've been using it in my own personal projects for something like two years.
> Yet Haskell proved me very wrong on that, so much that it was almost like re-learning programming. The reward is immense, but you have to really want to learn it.
I've given it 3 attempts, and each time I get closer to understanding how to use Haskell IRL instead of for toy things, turning it into reality with all the different... styles or paradigms?
Looking forward to my 4th attempt, I think I'll get it next time! Just need the spurt of motivation.