
If You're Not Doing Hardware/Software Co-Design, You're Doing It Wrong!


Quinn Jacobson, SiPanda Hardware Architect, Thursday June 24, 2021


One of SiPanda’s founding values is that the hardware and software making up high-performance network I/O solutions need to be “co-designed” to ensure that they meet customer needs. So what does it mean to co-design a solution? Before we jump into that topic (our next blog), let’s talk about what co-design is not – static hardware/software partitioning.


System design has traditionally revolved around partitioning a problem (and the system that solves it) into smaller and smaller pieces that individual teams can work on independently. The first partition is usually between hardware and software, with a rigid, documented contract between the two. When problems are static and well understood, this approach can work well: it allows teams to focus on their areas of expertise and execute independently of each other.


However, statically partitioning hardware and software can lead to huge inefficiencies, and even outright failures, when one or more of the following factors is present:


  • Unknowns: If parts of a problem are undefined or poorly understood in the original planning phase, there is a risk that the “onion-peeling” of problem/solution definition will (sometimes radically) upset the partitioning that has already been done. The same is true when significant portions of the solution rely on new or unproven technologies. Too often, teams stick to the plan and continue to build their part even when it should be obvious that what they are doing no longer makes sense. This leads to overly complex, inefficient, and even nonsensical solutions to what should be simple problems.


  • Problem Evolution over Time: This is similar to the “unknowns” problem, except that here the problem definition changes over time as the solution must support new needs. This can be accommodated, to some extent, by building in “reserves” (unused processing or storage resources, for instance) or by planning moves to new process nodes (bigger, faster ASICs and FPGAs over the life of a solution). Even so, teams often stubbornly stick to a partitioning of the problem that is grossly outdated.


  • Changes in Related Systems: One should frequently stop and ask whether the assumptions behind an approach are still valid; otherwise, you risk solving the wrong problem. Distributed computation has typically had three large “blocks”: computing, storage, and networking. From the early 2000s, with the introduction of virtualized computation, storage was the slowest link in the chain. That changed with the arrival of flash storage (especially NVMe storage), which by around 2018 had made networking the slowest part of the chain. This pattern repeats on a roughly predictable cadence, though the causative factors or technologies behind each transition are rarely known in advance.

All of these factors tend to make solutions based on a rigid problem/system partitioning brittle, and the trend is only accelerating. That is why co-design, which uses a “dynamic” contract between hardware and software that evolves as needed, is so critical: done well, it allows a solution architecture to adapt to risks, unknowns, and ecosystem changes. We will look at this in more depth in our next blog.
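In the meantime, here is a minimal sketch of what a “dynamic” contract can look like in practice. This is our own illustration, not a description of any SiPanda interface, and every name in it is hypothetical; the idea (loosely modeled on feature-negotiation schemes such as virtio’s) is that the driver asks the device what it can do at startup and binds each feature to hardware or software accordingly, instead of freezing the split at design time.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical capability bits a NIC might advertise at runtime. */
#define CAP_CSUM_OFFLOAD  (1u << 0)  /* hardware checksumming   */
#define CAP_PARSE_OFFLOAD (1u << 1)  /* hardware packet parsing */

/* The "contract": the device reports what it can do, and the driver
 * chooses, per feature, whether to use hardware or software. */
struct datapath_ops {
    uint32_t hw_caps;                        /* advertised by device */
    void (*checksum)(void *pkt, size_t len); /* bound at init time   */
};

static void sw_checksum(void *pkt, size_t len)
{
    (void)pkt; (void)len;  /* software fallback path */
}

static void hw_checksum(void *pkt, size_t len)
{
    (void)pkt; (void)len;  /* hand the work off to the NIC */
}

/* Decide the hardware/software split when the system comes up, based
 * on what the attached device actually supports. */
static void datapath_init(struct datapath_ops *ops, uint32_t hw_caps)
{
    ops->hw_caps  = hw_caps;
    ops->checksum = (hw_caps & CAP_CSUM_OFFLOAD) ? hw_checksum
                                                 : sw_checksum;
}
```

The point of the sketch is the shape of the interface: the boundary between hardware and software becomes a runtime decision, so swapping in a faster NIC (or losing a feature) changes a binding, not the architecture.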


SiPanda was created to rethink the network datapath and bring both flexibility and wire-speed performance at scale to networking infrastructure. The SiPanda architecture enables data center infrastructure operators and application architects to build solutions, from cloud service providers to edge compute (5G), that don’t require the compromises inherent in today’s network solutions. For more information, please visit www.sipanda.io. If you want to find out more about PANDA, you can email us at panda@sipanda.io.
