Lots of Rust today – as we (or rather, the community) wonder whether this is the moment for it to replace C++ entirely. Aside from that: a new edition of “The State of DevOps”, the public launch of Dall-E, and controversy around Make-A-Video from Meta.
1. Is it time to put C++ away?
Rust has had a good run lately. And that’s both on the side of actual achievements and public relations.
It has been rumored for quite some time that Rust may become an officially supported language in the development of the Linux kernel. The word “may” can now be crossed out, as Linus Torvalds himself has spoken on the subject. He made it clear in a conversation with ZDNet editors that “Unless something strange happens, Rust will make its way into 6.1.” Interestingly, it turns out that the last problem to be solved is not the language itself – few still doubt it will be a solid companion to C – but the tooling. With a project as large as the Linux kernel, introducing an additional compiler into the build process is not easy. Even here, however, Rust has an ace up its sleeve: rustc is built on LLVM, the same backend that powers the popular C compiler Clang, with which the kernel can already be built – and that should dispel many doubts.
That Rust would sooner or later come to Linux had long been rumored. The language’s position has been further strengthened by Mark Russinovich, CTO of Microsoft Azure. Speaking about the future of C and C++ – the current leaders in the systems programming space – he stressed that, from his perspective, it is hard to justify their use in any new project anymore. Rust, of course, automatically appears as the potential successor, given the involvement of companies such as Amazon, Meta, and Google. One after another, they declare their commitment to the Rust ecosystem and showcase actual use in their production systems. Admittedly, Google still has its Carbon, which has been in the news recently, but it probably has a long way to go if it wants to dethrone Rust from its position as the most loved technology. Carbon’s best chance may come only once people start actually using Rust at scale – that’s when Google’s project could begin climbing toward the top of the Stack Overflow Developer Survey.
And since there’s been so much about Rust, it’s hard not to mention that last week saw the release of its new version, 1.64. In it, you will find not only further work on improving asynchronous programming (the aspect for which the language draws the most criticism), but also better compatibility with C code. In addition, the developers realize how important it is to give programmers a good out-of-the-box developer experience, so they have set up a Rust Style Team to maintain the existing formatting rules for the language and create additional ones. Clearly, the Rust developers realize that the stars have aligned in their favor.
- Programming languages: It’s time to stop using C and C++ for new projects, says Microsoft Azure CTO
- Rust 1.64
- Announcing Rust Style Team
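Since the release notes highlight better C compatibility, here is a minimal sketch of what that looks like in practice. As of 1.64, the C-compatible type aliases such as `c_int` are available in `core::ffi`, so even `no_std` code no longer needs the external `libc` crate just to declare a foreign function. (The `abs` binding below is my own illustrative example of calling into the platform’s C library, not something taken from the release notes.)

```rust
use core::ffi::c_int; // available in core (not just std) as of Rust 1.64

extern "C" {
    // `abs` from the platform's C standard library; Rust binaries link libc by default.
    fn abs(input: c_int) -> c_int;
}

// Safe wrapper: calling across the FFI boundary is `unsafe` because the
// compiler cannot verify the foreign function's signature or behavior.
fn c_abs(x: c_int) -> c_int {
    unsafe { abs(x) }
}

fn main() {
    println!("abs(-42) = {}", c_abs(-42)); // prints "abs(-42) = 42"
}
```

Nothing here changed behavior-wise; the point of 1.64 is that the same declaration now compiles without `std` or third-party crates, which matters for exactly the kind of embedded and kernel code discussed above.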
2. The State of DevOps 2022 takes on security issues
Now, by way of a lengthy introduction, let me explain why The State of DevOps is such an important publication for the industry.
One of my favorite talks of recent years is The Illogical Engineer, delivered by Paweł Szulc. If you haven’t watched it: Paweł shows the problems stemming from the fact that, as an industry, we haven’t developed real rules and repeatable patterns of behavior, only so-called “good practices”. Being a “Software Craftsman” implies forever relying on tribal knowledge – on what we, as journeymen, learned from the master in the early stages of our careers.
I think this is due to very specific reasons. After all, the process of software development is chaotic and hardly susceptible to empirical testing – it’s hard to build the same system twice with two different approaches and verify which was better. At the same time, however, there are people trying to bring some order to this ecosystem for us. Books such as Software Design X-Rays attempt to synthesize various scattered scientific studies and draw more universal conclusions from them.
Another representative of this category is Accelerate, currently one of the most respected books describing how to build an efficient development process. Using statistical methods, it shows that certain factors distinguish effective teams from mediocre ones. Its observations have become so respected that it’s hard to find a list of the best IT books that doesn’t include Accelerate – which is probably recommendation enough. And – finally getting to my point – its authors based their observations precisely on The State of DevOps survey they conducted.
Now that I have highlighted the relevance of this research – or at least I hope I have – we can finally move on to the report itself and its latest edition. This time the main focus of the report’s creators was the so-called software supply chain, specifically its security. The conclusions are very interesting: it turned out that the most significant predictor of an organization’s good security practices when developing applications was cultural, not technical. It is not strict security rules and iron-fisted enforcement that work best, but a high-trust culture and blameless post-mortems (and, with them, a greater sense of safety). This won’t surprise anyone who follows “thought trends” in the software development world, but it’s nice to have it backed up by data – even if we’re talking about a report like The State of DevOps, which deals in statistics and correlations rather than hard causal proof.
I highly recommend reading the new edition of the report – and, even more so, picking up Accelerate (if you haven’t already).
And while we’re on the subject of Google (DORA, the research group behind The State of DevOps, was purchased by that very company a couple of years ago), I can’t help but mention the big announcement that the “big G” is shutting down Stadia. For those unfamiliar with Google’s newest victim: the project let you stream games from the cloud. Fortunately, this does not mean the end of similar initiatives, as competition in this market is fierce. To all those who trusted Stadia, I wish a nice farewell party.
3. Wave of criticism pours on “Dall-E for Video” from Meta. Why?
Yes, my darlings, AI again. Well, how could I not write about AI synthesis when there is so much going on…
Let’s start with (yet another) one of the year’s biggest launches. Dall-E 2 is the model that started all the craziness we are dealing with, but in recent weeks it has been somewhat overshadowed by the competition. The reason is clear: Stable Diffusion was released outright to the community, and Midjourney was relatively quick to let people play with its model. Dall-E, on the other hand, remained firmly closed behind a waitlist – always an effective disincentive when we want everything “fast” and “now”. The last barrier has now fallen (aside from providing a phone number, which the registration process still requires): anyone interested can check for themselves how the OpenAI tool compares to the competition.
But Dall-E is something everybody already knows. This week, several more projects were unveiled, and the one that certainly got the most attention was Make-A-Video from Meta. While the solutions presented so far generated mostly static images (although Stable Diffusion, for example, has begun to be used to create simple animations), the solution from Meta AI generates short, simple video clips. Admittedly, the results are not as polished as those of the current static-image generators – they are more on the level of the first Dall-E (or even a bit below). But we all realize that Make-A-Video will sooner or later be followed by some form of Make-A-Video 2, so what we see now is just the first brick. The same goes for DreamFusion, published by Google researchers, which generates 3D models from text prompts.
Charming as all that is, the new batch of projects has resurfaced old controversies. It turns out that Make-A-Video was trained on a dataset called WebVid-10M, which consists mostly of… videos scraped from Shutterstock. And while this remains in a legal gray area, there is no shortage of opinions that all the “scientificness” of these projects serves mainly to shield the world’s largest corporations from responsibility. After all, everything happens in the name of science!
We are moving through a very gray area of ethics here, and the problem is likely to intensify in the coming years. Licensing law is poorly suited to these cases, and AI is developing far too fast for the realities of the legislative process. If you want to explore the topic further, Andy Baio published an excellent article on the subject last week.