53 Comments
Sep 10 · Liked by Sarcastosaurus

Thanks for addressing this critical topic. As for chip fabs, the "hope" or maybe it's "hype" is that the TSMC plant in Arizona will be a powerhouse "in 2025"

https://www.bloomberg.com/news/articles/2024-09-06/tsmc-s-arizona-trials-put-plant-productivity-on-par-with-taiwan


I wouldn't count on the TSMC plant in Arizona amounting to much of anything. The Taiwanese discovered that the US lacks workers willing to work the gruelling shifts, and lacks the know-how for such production. It will take years and great expense to get it done. For now, they just use their own staff from Taiwan, but the Arizona site is a "secondary" site that TSMC is not putting huge hopes on.


Respectfully disagree. The $65 billion they have invested in the plant, the single largest foreign investment in US history, plus the $6.6 billion from Uncle Sam, says that both parties are serious about it.

Don't forget GlobalFoundries, which is the number 3 company in the world even though it's only 14 years old.

It's based in New York, and over half of its plants are here in the United States.

To be number 3 in the world means they must have some know-how and skilled US workers to make it happen.

Sep 12·edited Sep 12

$65 billion is not really a lot for the industry. Also, think of it from Taiwan's perspective: there is really no benefit for Taiwan in moving its production to the US. One of the main reasons Taiwan is important to the US is that it controls TSMC and thus worldwide chip production. If chip production moves to the US, Taiwan loses its relevance, making itself more vulnerable to Chinese invasion, and the US would be less likely to protect it. If I understand that, so does TSMC.

As for $65 billion not being a lot, compare the $1 trillion the Chinese invested in their chip industry last year. SMIC (the Chinese equivalent) had over $100 billion invested in it alone.

Most of GlobalFoundries' fabs are not in the US; they have other services here, like packaging. They rely on TSMC or partial production in Korea/China.

Sep 12·edited Sep 12

Here is an article that discusses it in more depth:

https://www.businessinsider.com/tsmc-taiwan-semiconductor-phoenix-arizona-factory-chipmaker-job-qualification-debate-2023-8

Basically TSMC is unhappy with the skills of American workers and their inability to work long hours. So it had to "import" many workers from Taiwan to get the job done.

Not to mention the many other problems that plague the TSMC Arizona site

https://restofworld.org/2024/tsmc-arizona-expansion/


I know they've had construction delays, but I'd think those plant construction workers are not the same ones who will be making the chips

Lucky for us, the Chinese investment in chip tech has only narrowed the gap. They are still 3 years behind Taiwan.

TSMC is the cherry on top, but the whole pie is Asia, which China seeks to dominate, and thus control 50% of the world's GDP.

The small island democracy is under existential threat from China. It's in its interest to diversify.

The Quad will resist this, ensure security in the S China Sea, and undermine Chinese ascendancy in every way possible.


Read the article I posted. It's not just about the people building the plant; there is a whole host of other problems with US engineers, forcing TSMC to rely on its own talent.

The Chinese were 5 years behind a year and a half ago and are now three years behind. In a few more years they will be on par or even ahead.

China doesn't want to dominate Asia; the problem is that it feels threatened by the trade war, and by the US and Taiwan coming too close to cutting off its trade flows.

But here's the rub: if Taiwan "diversifies", it will no longer be defended. The only thing that has stopped a Chinese invasion so far is the reliance on chip production in Taiwan. TSMC kind of shot itself in the foot twice: first by complying with US sanctions and second by moving a fab to the US.

As the article states, "TSMC expects lower profits from its Arizona facility and only needs it to ensure that the US government doesn't turn against it and to secure CHIPS act funding"

The Quad doesn't care unless the US pushes it. It's mostly a US endeavour.

Sep 10 · edited Sep 10 · Liked by Sarcastosaurus

Since April 2024 I can't buy video cards on ebay.com (account from Ukraine). I get this banner:

"Problem with bid

Attention buyer

The item you are trying to purchase is not available for purchase in your location. You cannot buy this item."

Support says this is due to US government sanctions against Ukraine, covering the sale to it of anything that may be related to AI.

Until April 2024, I bought more than 200 video cards for repairs there.


There are some companies that are lazy and just assume that sanctions on occupied Ukraine extend to all of Ukraine. I've run into this before. I'm not sure if that is what eBay is doing or not, or if this is part of CHIPS Act legislation.


>> "Here are some Ukrainian military uses for AI you might not think about. Maximizing administrative tasks. Like invoices and payments. Maximizing logistics. AI could decrease the time it takes weapons and gear to get to the front. Creating pro-Ukraine propaganda. You’d be crazy to think this isn’t happening already. Russia certainly does it. Creating predictive models of Russian movements and attacks. AI is just a very complicated probability/prediction engine."

- The Ukrainian military cannot get past its bureaucracy to optimize its inflexible rules and procedures, which originate from Soviet times. It has no place for AI, as any admin task generates tons of paperwork that needs to be approved by all levels of command.

For drones and robots, which are the main AI applications in combat, there are three levels of autonomous operation:

1) Target tracking. An operator flies the drone high enough to avoid the enemy's EW. The operator sees and recognizes a target, selects it on his screen, and the selection is sent to the drone, which "sees" something like "a gray rectangular object (tank) on a white background (snow)". The drone flies into the thing that was selected without much "thinking" about what it really is; it's just an area that differs in colour from its surroundings. This algorithm has been around since about the 1970s and does not require any AI chips; it works well on DSPs like those of first-generation digital cameras. In fact, this is how Javelins work, and they are ancient. Target tracking is implemented in Russian Lancets and in very few Ukrainian FPV drones. We can only dream of getting it more widespread, as it nullifies close-range EW. (A minimal code sketch of this kind of tracker appears after this list.)

2) Image recognition. The drone can discern a tank from a house. This is required for long-range drones to make them GPS-independent (as GPS is jammed and spoofed by both sides). It does require a decent GPU or TPU on the drone and some machine learning, but the main task is navigation (mapping the terrain that the drone observes to photos from Google Maps), so no combat footage is needed to train the AI. This is what we have in the Ukrainian drones that hit the Russian refineries, and this is inside Storm Shadow missiles. Another application of image recognition is looking for enemy vehicles in footage from high-flying reconnaissance drones and satellites. That does rely on learning from combat footage, but does not need an AI chip in the drone: the drone sends the video to the cloud, where it is passed to the AI to look for tanks.

3) Situational awareness and decision-making. That's the "true AI" of the Terminator (or robo-dog), which international conventions forbid using in war. However, if the ongoing war lasts a few more years, it will evolve. It's absolutely immune to EW. The operator tells the drone to "kill anything it finds 5 km to the south", and the drone goes south, finds the most valuable target and destroys it. This is also what assassination drones would need to kill targeted politicians during public events.
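To make level 1 concrete, here is a minimal sketch of the contrast/template "lock-on" tracking described in point 1, written with OpenCV purely for illustration. It is not any fielded Lancet or Javelin implementation; the match threshold and the template-update policy are arbitrary choices for the example.

```python
# Minimal sketch (assumption, not a fielded system): lock onto the patch the
# operator selected and re-find it in every new frame by template matching.
import cv2

def track(video_path, init_box):
    """init_box = (x, y, w, h) around the operator-selected target."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    x, y, w, h = init_box
    template = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Normalised cross-correlation: cheap enough for a DSP-class part.
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, top_left = cv2.minMaxLoc(result)
        if score > 0.5:                      # re-acquire only on a confident match
            x, y = top_left
            template = gray[y:y+h, x:x+w]    # slowly update the template to follow the target
        # (x + w/2, y + h/2) is the steering point handed to the flight controller.
        print(x + w // 2, y + h // 2, round(score, 2))

    cap.release()
```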

>> "Ukraine should stockpile AI chipsets that it needs not only to deploy weapons but to also train models."

- Ukraine is very low on money. It lives hand-to-mouth and has recently defaulted. Soldiers buy a lot of equipment with their salaries or rely on volunteers, but everybody who used to donate has already donated all their savings, and salaries are now lower than before the war, so the volunteers don't see much funding either. There is no money for immediate combat needs, let alone for stockpiling expensive chips, which will sell at half the price in a couple of years.

>> "Ukraine should use its current strategic advantage and become vertically integrated into the AI chip supply chain: design, training, and eventually manufacture."

- That simply does not exist. It has always been hard to smuggle high-end or dual-use equipment into Ukraine, as the country is not part of NATO and thus is not trusted by the equipment manufacturers. Ukrainian customs also used to seize anything it considered expensive, to get a bribe for releasing it.


I would also add that there is always a trade-off between an expensive, precise solution and cheaper ones in higher volumes. For short-range FPV drones it's better to have a cheaper solution with simple image processing, or it may even be better to have many dumb drones; it depends on the situation. EW can also be overcome in different ways.


A DSP for tracking is cheap ($10 or so). An embedded processor for image recognition is in the range of $200 to $2,000, depending on its power (which also means higher energy consumption). Situational awareness is still early-phase R&D worldwide.


An AI-based solution for FPVs is around $150-200.


And a cheap FPV without AI and without a warhead? My guess is the price is around $400. So AI adds about 30-50% to the price.


It depends: 8” ones are $400+, 10” are $500+, 12” are $650+.


It makes no sense to install AI elements on 7”/8” drones, only on 10” and bigger.


Only $10 for a DSP with an optical tracker on board? Which DSP and algorithms are you referring to?


If drones are united in a multi-layer system, then target tracking (plus flight control and visual navigation) could be done by simple FPV edge AI. Yes, it would cost an extra $100, but it would drastically reduce the requirements for pilot training and also make it possible for pilots to work remotely, since no real-time video feed needs to be transmitted. More complex target-identification algorithms, say requiring a $500 investment, could be put on fixed-wing recon platforms (which would do the primary identification). Situational awareness and analytical recon can then run on a central system like Delta (which would also do a secondary identification robustness check).

That means situational awareness and decision-making support, not actual decision-making: the latter is not required, as there is no lack of human decision-makers, and many current pilots could in this way upgrade to managing swarms instead of single drones. That is not that difficult to build, though it is more algorithmic work than AI: drawing supply-line vectors, mapping the application and distribution of forces. Because each activity on the front line has some activity behind the front line supporting it, finding that connection, its critical points, concentrations, hubs, etc. would be the task of such analytical decision support.


I'm afraid that large fixed-wing drones will follow the Bayraktars.


By fixed wing I mean things the size of an Orlan-10, and there should be hundreds of them in the air at once. They may split the job in terms of the sensors they carry: FLIR, SAR, RF, optical, acoustic. Some of them may carry a protective weapon (instead of a sensor payload), such as a droppable FPV, so any threat from a short- or medium-range SAM could be identified and destroyed. They should fly above MANPADS range, say 3.5-4 km, and they should be small enough that long-range SAMs cannot detect and/or shoot them down.


We've already seen FPV drones hitting fixed-wing drones. And there are prototypes of reusable drone hunters.

It will take a lot of time to design and mass-produce enough specialized fixed-wing drones to have "hundreds" of them over a single sector. By that time the enemy is expected to have deployed 10x the quantity of anti-drone FPV interceptors or reusable drone hunters.


Yes, you are right. But this is only because they constantly emit a video and control signal with a specific "signature", which can be detected by any "passive" radar (passive meaning it is not detected by the enemy; an active radar instead sends a signal out, and the reflected signal gives a picture of the aerial environment). If the fixed wing stops emitting signals and goes into silent mode, detecting it would require an active radar, which would itself be detected from afar and would get a ballistic strike.

The only way for a fixed wing to stop emitting is to preprocess information on board and then transmit in short digital pulses, giving the central post only information about identified targets and their coordinates, maybe with some extra photos for manual confirmation. Reducing RF communication to a minimum will increase survivability considerably.

Also consider such fixed wings carrying passive radars on board. Since an FPV operator is emitting a control signal, it is also possible to detect (and, with several fixed wings, triangulate) their location and send either additional recon or a strike there.
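To illustrate the triangulation idea, here is a small, hypothetical bearings-only geolocation sketch: each fixed wing reports its own position and the bearing on which it hears the operator's control signal, and the emitter is estimated as the least-squares intersection of those bearing lines. The numbers and the function name are made up for the example.

```python
# Hedged sketch (not any fielded system): bearings-only triangulation of an
# emitter from several aircraft, each reporting position and a bearing.
import numpy as np

def triangulate(positions, bearings_deg):
    """positions: list of (east, north) in metres; bearings_deg: bearing from
    each position towards the emitter, degrees clockwise from north."""
    A, b = [], []
    for (x, y), brg in zip(positions, bearings_deg):
        t = np.radians(brg)
        dx, dy = np.sin(t), np.cos(t)          # unit vector along the bearing
        # Any point p on that bearing line satisfies (-dy, dx) . (p - pos) = 0
        A.append([-dy, dx])
        b.append(-dy * x + dx * y)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol                                  # estimated (east, north) of the emitter

# Three aircraft, bearings chosen so the lines intersect near (1000, 2000):
print(triangulate([(0, 0), (3000, 0), (0, 4000)], [26.57, 315.0, 153.43]))
```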


There are many operating Russian SAM systems which are not destroyed by ballistic missiles for some reason.


Do some reconnaissance UAVs use annotation AI on the operator's device (notebook, tablet)? I.e. the UAV sends only the video stream and the annotation is performed on the receiving side. Or is it always necessary to process the video stream on the UAV's chips? If so, what are the reasons behind that?


Recognition can also take place on the notebook, by transmitting the data from the UAV instead of relying on it being pre-loaded. The coordinates of the UAV are known, and through video recognition the coordinates of the target can be calculated on the notebook. The UAV can then be sent to those coordinates. For stationary targets this is sufficient. However, on its way to the target the UAV can lose connection with the notebook due to interference, and the target may change direction during that time. As a result, no synchronization with the recognition occurs, leading to failure. When recognition is integrated into the UAV, it can correct itself visually, as the target remains clearly recognizable. With fast chips the correction synchronizes quickly; with slower chips it takes longer and can thus lead to failure.
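As a rough illustration of the notebook-side geometry described above (not any real system), here is a flat-earth sketch that turns the UAV's position, altitude, heading, camera tilt and the detection's pixel coordinates into an estimated target position. All parameter names and the example numbers are assumptions for the sketch.

```python
# Rough sketch, illustrative geometry only: estimate a target's ground
# position from UAV telemetry plus the pixel where the detector found it.
# Assumes flat ground, a downward-tilted pinhole camera and small angles.
import math

def target_ground_position(uav_east, uav_north, altitude_m, heading_deg,
                           cam_tilt_deg, px, py, img_w, img_h, hfov_deg, vfov_deg):
    """px, py: pixel of the detection; tilt measured down from horizontal."""
    # Angular offset of the pixel from the image centre.
    az_off = (px - img_w / 2) / img_w * hfov_deg
    el_off = (py - img_h / 2) / img_h * vfov_deg
    depression = math.radians(cam_tilt_deg + el_off)   # angle below the horizon
    azimuth = math.radians(heading_deg + az_off)       # absolute bearing to the target
    ground_range = altitude_m / math.tan(depression)   # flat-earth projection
    east = uav_east + ground_range * math.sin(azimuth)
    north = uav_north + ground_range * math.cos(azimuth)
    return east, north

# UAV at (0, 0), 300 m up, flying north, camera tilted 30 degrees down,
# detection slightly right of centre on a 1280x720 frame:
print(target_ground_position(0, 0, 300, 0, 30, 700, 360, 1280, 720, 60, 34))
```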


Thanks for the reply. But is recognition on the operator device used in real combat?

Sep 10·edited Sep 10

Vanilla laptops are too slow for real-time recognition. That requires a powerful video card (which uses a lot of energy) or an external TPU processor, like https://coral.ai/products/accelerator/
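For illustration, here is a minimal sketch of how such an external accelerator is typically driven from a laptop, assuming a TensorFlow Lite model that has already been compiled for the Edge TPU. The model file name is a placeholder, and the delegate library name is the usual Linux one.

```python
# Hedged sketch: run a pre-compiled TFLite model through the Edge TPU
# delegate (e.g. a Coral USB accelerator) on an ordinary laptop.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="detector_edgetpu.tflite",                  # hypothetical model file
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame_rgb):
    """frame_rgb: HxWx3 uint8 array already resized to the model's input size."""
    interpreter.set_tensor(inp["index"], np.expand_dims(frame_rgb, 0))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores)), float(np.max(scores))   # (class id, confidence)
```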


Yes, you must have a proper device, and it consumes more energy. But such a configuration may have the advantage of saving the UAV's battery (i.e. longer flight) and a cheaper UAV; one operator device can be reused for many dumb UAVs as they are destroyed or damaged.

So I am just curious whether such a configuration is used in real combat, or whether it is better to always implement AI annotation in the UAV?


If there is a data link from a UAV to the operator's laptop, there is very likely also a link from the laptop to the cloud; thus, ideally, the image recognition can be done in the cloud. The UAV needs to do something (tracking, recognition, decision-making) on its own only if it can attack targets and it cannot connect to the operator.


Here it is best to say that recognition is done on a central server and the results quickly appear on the local tablet, laptop or computer, because we assume most of those have a fast internet connection for semi-real-time battle maps. But this assumes a video/photo stream from the edge device, which becomes more and more problematic as those devices become targets for EW or anti-air FPVs. Fibre-optic FPVs are a temporary solution and a dead end. Edge AI is an inevitable need.


A lot of annotation is done by 3rd parties after the fact. There are new startups that do this now.


In real-time combat usage? Or do you mean some delayed post-processing?


I assume AI can be used to annotate in real time, but I don't know why it would be for the UAV example. Annotation is part of the model training process, not necessarily part of the inference process. "Inference", as I understand it, is the query part of the process, where the already-built model is asked, in the case of computer vision, "what am I looking at?"

The models are built and trained, then the useful part of the model is uploaded to the end-user chip, in this case the UAV. Then inferences are run on that model ("is this a tank?"). Inferences use much less compute than model training. Hence the use of AI chips that are basically GPU chips with additional neural processing units added to run AI functions, either on the actual chip itself (more expensive) or added to the chipset/board. A good example of this is a chiplet added to a board to encode and decode cryptography; it stays off the main GPU.
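A hedged toy example of that split, using PyTorch purely for illustration: the expensive training loop runs once on big hardware, and the deployed side only loads the saved weights and answers "is this a tank?" style queries. The model, file name and data are placeholders, not anyone's actual pipeline.

```python
# Illustration of the training/inference split described above.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):            # stand-in for a real detector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
    def forward(self, x):
        return self.net(x)

# --- training side (workstation / cloud): expensive, done once ---
model = TinyClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                        # placeholder loop over annotated images
    images = torch.rand(16, 3, 64, 64)
    labels = torch.randint(0, 2, (16,))     # 0 = "not a tank", 1 = "tank"
    loss = nn.functional.cross_entropy(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
torch.save(model.state_dict(), "tank_classifier.pt")

# --- inference side (drone / edge chip): cheap, runs per frame ---
deployed = TinyClassifier()
deployed.load_state_dict(torch.load("tank_classifier.pt"))
deployed.eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 64, 64)        # one camera frame, resized
    is_tank = deployed(frame).argmax(dim=1).item()
```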


I learnt a lot from that common-sense explanation of where things are at, computing-, AI- and GPU-wise! Thanks!


Thank you.

Sep 10 · Liked by Sarcastosaurus

Thank you for this topic!

Yes, indeed many things are well described, including the need for Edge AI.

However, I would disagree with the conclusion on chips.

Firstly, chips are only required for Edge computing, because model training is nowadays mostly done in a cloud environment involving clusters of specialised compute resources. It could be done locally as well, but as of today I do not see any reason for expedient capacity building in that regard.

Regarding edge devices: chips, SoCs, SoMs. Firstly, for our readers to understand, those are a kind of very miniature version of a big computer, GPU and all. They may vary in cost from, say, $50 to $750. And they only allow you to "execute" the algorithm (what we call AI nowadays is really just a fancy statistical algorithm/model) that was created on "big" computers.

But for there to really be a need to stock up on those, they first need to be used, and there should be a doctrine for that. And quite sadly (or happily, depending on one's point of view), it seems the ZSU has neither a need for nor a doctrine for using them. Reading between the lines of these interviews:

https://zenitha.substack.com/p/interview-with-kalinin-5-jul-2024

https://www.youtube.com/watch?v=RHS6NQ7rqmY

I would say there is no hope of any change this year, or even the next.

For an example of a possible doctrine you can read this:

https://zenitha.substack.com/p/permanent-reconnaissance-interdiction

Sep 10 · edited Sep 10 · Liked by Sarcastosaurus

I highly doubt that would present itself as a problem for two reasons:

1. What we currently call AI is quite inefficient in terms of result vs computing power. The corporate world is currently trying to find uses for it, and I have no doubt militaries are trying as well. But the problem is that AI is fundamentally "dumber" than more traditional non-AI algorithms: it produces less reliable results with orders of magnitude more computing power than an older non-AI algorithm would.

So on one hand I think Ukraine probably has access to better non-AI software that does not need these chips, and on the other, putting AI chips in drones would be terribly inefficient. Even if, by some miracle, someone finds a way for AI algorithms to perform marginally better than non-AI (which is already a miracle in itself, as AI performs even worse when faced with an unexpected situation), the cost of putting an at least 1,000 times more powerful chip in the drone would not be justified.

The core problem is that most people do not understand how good non-AI algorithms can be and how much they can achieve, and are already achieving. So while it's true that AI is new and has the potential to be used in ways we don't yet understand, most of the uses people imagine for AI are for problems that have already been technically solved, where AI would just provide a less efficient solution.

2. In terms of a potential shortage, if Taiwan's factories go down right now, Ukraine and AI would be the least of our problems, as the entire world economy would grind to a halt. While it's true some lower-tech chips are manufactured outside of Taiwan, it is still the main manufacturer of those too; it's not just specialized in the high-end stuff. Taiwan's factories going down would cause a dramatic shortage of everything: computers, phones, cars, TVs, refrigerators, washing machines, etc.


You are definitely right about most of the terminology just being branding nomenclature. That was and is a big hurdle to understanding this topic.

author

There are good and there are bad non-AI algorithms. For example, the one on which the software for autonomous operation of MIM-104 Patriots is based... sigh... was de facto declared trash back in 2003 (right after it shot down two friendly jets over Iraq).

Sep 11·edited Sep 11

That's unlikely to be a problem with the underlying technology; more likely good old lack of human intelligence. The organizational problems you describe in the UA military are quite similar to the organizational problems in writing software: the result is almost always decisively less than the potential. So my guess is that nobody thought it a good idea to test it properly in a simulated environment before they put it in a missile. Why waste the effort if you're "certain" it works? And do you even want to know if it doesn't work? Will you get a reward for reporting problems, or for introducing delays in the delivery of the final product? And it's not like you personally will fly on the plane that gets hit, or that somebody will hold you accountable. It's not even your fault: it was a team of 10 people, nobody told you to test it in that specific situation, and surely it was somebody else's job to think about this.

People have been imagining how artificial intelligence will fix natural human stupidity from the dawn of the industrial revolution.

Sep 10 · edited Sep 10 · Liked by Sarcastosaurus

Regarding chip manufacturing: not realistic. Having a fab of the required capability is such a burden that nowadays even big manufacturers have serious trouble with it.

Russia tried it, but in the end they landed at TSMC, as I recall. And at that time a fab was still a lot cheaper.

So, integration it is. Just don't get on the wrong end of sanctions and all is fine. And if something still doesn't work out, then look at how China does it, stripping video cards in bulk for chips to be used in their datacenters after re-mounting them onto proper PCBs.

Sep 10 · Liked by Sarcastosaurus

For recognition, an already-finished (vision) AI model should be enough. There are some good ones available on Hugging Face, 4-5 GB in size. Even the Chinese have already published a model. Those should be sufficient for recognizing objects, shouldn't they? Or are they tuned specifically for military objects?

If not, then... A normal CPU is also sufficient for executing the code. It takes a while, but the result is the same as with a GPU. IoT boards come in various versions, with support for various programming languages as well.

During the flight itself, the navigator can follow the detection and marking of objects from a safe distance and intervene in the target selection. Thanks to range estimation, the coordinates of the target can even be handed off to others in this state. Once the target has been set by the navigator, the UAV can move to the object autonomously and carry out the actions configured by the navigator there.

There are other possible scenarios as well. But what I wanted to convey is simply the fact that, for recognizing and navigating to the target, the UAV does not necessarily need a dedicated AI chip.

Sep 10 · edited Sep 10 · Liked by Sarcastosaurus

> For the speeds necessary to do demanding AI computations well the chips should be 8 nanometers or smaller.

To be more precise, 8 nanometres refers to the size of the features engraved on the chip; the chip itself is much larger. Finer engraving makes for a faster chip because you can fit more circuits on the same size of chip, and perhaps also because, the circuits being smaller, signals take less time to travel from one to the other.

Edit: you do mention this elsewhere; this wording was just a little ambiguous, sorry for the nitpick.
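A back-of-the-envelope illustration of the "more circuits in the same area" point, with the caveat that modern node names are largely marketing labels rather than literal feature sizes, so this is only an idealized upper bound:

```python
# Idealized, illustrative arithmetic only: if node names were literal feature
# sizes, shrinking features would pack roughly the square of the ratio more
# circuits into the same die area.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(density_gain(28, 8))   # ~12x more circuits in the same area, idealized
```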

Sep 10 · Liked by Sarcastosaurus

Thanks for the update. Interesting topic in a very digitalized war.


So, noob question regarding the trained model:

The model is trained on thousands of pictures, and then this model is uploaded to each drone that has an AI chip?


Basically yes, but there may be a number of steps in between, like reducing/simplifying the model to fit the capacity of the AI chip (and thus increasing the error rate). And there can be several models: one, for example, responsible for targeting, tracking where exactly it needs to hit to destroy this type of target; one identifying the target in a complex environment, with masking, decoys, etc.; one providing geo-awareness, comparing pictures of the surroundings with satellite images loaded in memory; and one that executes flight control in complex conditions, including wind, rain and winter.
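As an illustration of the "reduce/simplify the model to fit the chip" step, here is a hedged sketch using TensorFlow Lite post-training quantization. The paths and calibration data are placeholders, and real pipelines add more stages (pruning, chip-specific compilation) on top of this.

```python
# Hedged sketch: shrink a trained model's weights to 8-bit so it fits a small
# edge chip, at the cost of some accuracy.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("trained_detector/")  # hypothetical model dir
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enable weight quantization

def representative_frames():
    # A handful of real input frames lets the converter calibrate activation
    # ranges for integer quantization (placeholder random data here).
    for _ in range(100):
        yield [tf.random.uniform([1, 224, 224, 3])]

converter.representative_dataset = representative_frames
tflite_model = converter.convert()

with open("detector_int8.tflite", "wb") as f:           # small file pushed to the drone
    f.write(tflite_model)
```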


If I recall correctly, the only company that manufactures the hardware necessary to create chips at that scale is in the Netherlands. How does that factor into the game? What is the lead time required to start up a green-field AI manufacturing operation?


Dear Benjamin, your article is mentioned in this interview, along with a commentary from a local industry expert; see minute 5:36, with auto-translated captions.

https://youtu.be/62L_E4zhSm4?feature=shared


Thank you, Benjamin, for this informative report.


1. There is no Artificial Intelligence and there can't be. Nowadays this hyped term is stretched to cover absolutely everything. Here we are talking about banal pattern recognition, whose algorithms have been known for a hundred years and have been used practically in the military since the '60s (automatic terrain recognition and navigation, ATRAN, for MGM-1 Matador missiles, 1964). Naturally, it is also possible to detect a tank and even a soldier; however, for 50 years this was not implemented in any way, although the microprocessors of the '70s already allowed such calculations. The fact is that if you completely entrust the detection of the target and the decision to attack to the machine, it is too dangerous for your own troops. It is almost impossible to exclude errors in the visual identification of friend or foe. For those who like to dream about autonomous weapons, I suggest reading the science fiction story "Polygon" (Sever Gansovsky, 1966).

2. I hear a lot of tearful lamentations about the availability of high technology to cannibalistic regimes, but no one asks how this happened and who should be held responsible for it. Why did communist China, which does not hide its ideology at all, use Western money to obtain technology and equipment, becoming a monopolist in the production of electronics? Moreover, the West, even while declaring a fight against aggressive regimes, continues to supply high-tech products. It is hard to imagine that this was possible during the Cold War.
