
yeah...i have one neverending Evernote note...called "To do"


This is, by far, the best video on soldering I have ever seen. Soldering by hand is a dying art... My dad had a car radio repair shop - back when radios were repairable (Delco: https://en.wikipedia.org/wiki/Delco_Electronics) - and he taught me how... like riding a bicycle, you never forget...


> Soldering by hand is a dying art...

I'm not sure. Back in the day we avoided SMD like the plague, and it had a reputation for being unapproachable. THT parts were highly sought after, and I would even say that a good deal of the success of AVR was because they offered THT versions of their µCs long after most others had stopped. Some of us even engaged in the uphill battle of lead-free soldering, only to be disillusioned.

We thought hand soldering would die with THT, but it didn't.

I see a young generation that has mostly overcome these hurdles. With their young steady hands, sharp eyes, high-lead solder, small temperature-controlled irons and other modern equipment, they just go about it. I envy them.


High lead solder? Are you sure about that?

For what it's worth, I took the time to learn lead-free soldering specifically so that I could teach my kids. I like to introduce them to safe hobbies (that's why they all went skydiving before their 10th birthday, too).


Leaded solder is safe to use if you wash your hands afterwards. Leaded solder - particularly 63/37 eutectic solder - is much more forgiving of poor technique. Lead-free isn't a massive inconvenience if you know what you're doing, but it can be absolutely infuriating for novices.


What about breathing the fumes? Also, doesn't (long-term) exposure to the fumes severely affect skin? I remember the folks soldering a lot of things for the Gemini and Apollo programs had very wrinkly, obviously damaged skin on their faces.


There is no lead in the solder fumes. If anything, lead-free solder is substantially worse, since the flux used for lead-free solder is a lot... harsher. Either way, this can be largely fixed by a small fume fan.


Thank you. In fact, I drive a small horizontal quadcopter propeller with a 3.7v LiPo while soldering to pull the fumes away.


I actually prefer lead-free solder because of its more aggressive flux and higher soldering temperature.

This allows burning away the enamel and tinning enameled wire directly with the soldering tip.

Though there are only two lead-free solders that qualify. I use the Felder Ultra-Clear EL Sn100Ni+. The other one is the Amasan BF32-3.


I suspect tip oxidation is an issue with lead-free? Or isn't it?

At what temperatures do you solder the lead-free alloys you mentioned?


Tip life is inevitably worse with lead-free solder. Using a brass wool cleaner rather than a wet sponge will help prolong tip life, as will the regular use of a suitable tinning/cleaning paste (e.g. Hakko FS-100 or JBC TT-A). Keep the tip wetted with solder as much as possible - a completely "clean" tip will oxidise much faster than one with a protective layer of solder.

The correct tip temperature for any hand soldering operation is the lowest temperature that will allow the joint to be completed in two to five seconds. In practice, that depends on a host of variables - the composition of the solder alloy, the properties and calibration of your iron, the thermal mass of the joint etc. A usual rule of thumb is the melting point of the solder plus 150°C, but your mileage will vary.
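
To put numbers on that rule of thumb, here's a quick sketch (the melting points are the commonly quoted values for each alloy; treat the outputs as starting points, not gospel):

    # Rule-of-thumb starting tip temperature: alloy melting point + 150 °C.
    MELTING_POINTS_C = {
        "Sn63/Pb37 (eutectic, leaded)": 183,
        "Sn60/Pb40 (leaded)": 188,
        "SAC305 (lead-free)": 217,
        "Sn99.3/Cu0.7 (lead-free)": 227,
    }

    for alloy, mp in MELTING_POINTS_C.items():
        print(f"{alloy}: melts at {mp} °C, start around {mp + 150} °C")

That lands around 330-340 °C for leaded and 365-380 °C for the common lead-free alloys; adjust down if the joint wets quickly.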


Oxidation is a big problem with the harsh no-clean fluxes in many cheap lead-free solder wires. Switching to a rosin-flux (RMA or RA) name-brand solder should fix that (it did for me).

Metcal have a great big doc on tip care that covers this.


Maybe it’s just me but I find SMT easier and faster to prototype with than THT. Apply paste, place all components on the board with tweezers, reflow, done. With THT I have to bend / cut the legs of most components, and solder each point individually.

As for “high lead solder” - you won’t buy it in Europe. We had to learn using lead free for rework and you know what - it’s not much different, assuming you have high quality equipment.


> As for "high lead solder" - you won't buy it in Europe. We had to learn using lead free for rework and you know what - it's not much different, assuming you have high quality equipment.

That's bollocks. You can buy leaded solder in Europe just fine. You only need to worry about lead-free if you want to sell a commercial product.


You can't in Poland if you don't own a registered business. I tried, and nope, they won't sell it to individuals.

But thanks for letting me know - maybe it was just a local problem with that particular store, not an EU policy.


I have a roll of 38% lead solder on my desk which I bought last year from Leroy Merlin in Portugal, so you can definitely buy it in Europe.


Any idea where I can buy some in Germany? Reichelt stopped selling it and I didn't get a chance to stock up.


Nowadays I don't bother with Reichelt or Conrad and just get most of my stuff from Mouser. They offer leaded solder in all kinds of forms. I have to say I haven't checked whether they will actually deliver it to Germany, though.


If you can do your joints on the first try, lead-free solder is exactly as "difficult" as leaded solder. I never notice the difference, except when I have to remove it.


Yeah that's true if you have a reflow oven and avoid 0402 and below. Ovens aren't cheap or small though.


Can't you modify a toaster oven to make a usable reflow oven?
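
From what I've read, the usual mod is a thermocouple plus a solid-state relay on the heating elements, with a small controller stepping through a reflow profile. Roughly this sketch, where read_temp_c and set_heater are hypothetical hardware helpers and the profile numbers are only illustrative:

    import time

    # Simplified reflow stages: (target °C, seconds).
    PROFILE = [(150, 90),   # preheat
               (180, 60),   # soak
               (225, 30),   # reflow peak
               (50, 120)]   # cool down

    def run_profile(read_temp_c, set_heater):
        for target, seconds in PROFILE:
            end = time.time() + seconds
            while time.time() < end:
                # Crude bang-bang control; a real build would want PID
                # and thermal-runaway protection.
                set_heater(read_temp_c() < target)
                time.sleep(0.5)
        set_heater(False)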


What’s strange is that the larger SMD packages are far simpler to solder than THT ones are. Give me an 0805 or SOP any day over a THT package.

I guess the real driver towards hobbyists using SMD is cheap and fast PCB design. You can’t really do stripboard or wire wrap with SMD packages.


>Some of us even engaged in the uphill battle of lead-free soldering only to be disillusioned.

I've never had any trouble using lead-free solder for through-hole or SMT (and I don't have any expensive or sophisticated equipment).


I'd definitely disagree about hand soldering being a dying art! There are lots of other great techniques, but hand soldering is still vital to developing hardware. Any hardware company worth its salt has at least a couple of rework technicians who are absolute wizards with soldering.

Also, stuff like reflow soldering can still be done by hand! It's a common thing to do if you need to touch up or modify a PCB and don't want to reflow the whole thing or damage any components.


I respectfully disagree, and say that hand soldering is not a dying art. If you’re working on boards on the regular you probably have a soldering iron and use it often.


Yeah I agree, I solder parts like those in the video by hand (even occasionally BGAs with hot air)


Yeah, hand-soldering as the means of mass production of consumer electronics devices has (mostly) died out, but the people who actually build and test things, or the people who fix consumer electronics devices, still usually do a lot of soldering.

It's also a necessary skill in a lot of technical hobbies. Modular synthesis by means of DIY modules, the whole mechanical keyboard thing, electric guitar maintenance...


> Soldering by hand is a dying art...

Let me tell you that I'm holding up the flag. Very poorly and with shaky hands, but I'm trying to hold it up... :-D


That’s something that is so obvious under a microscope. My hands are pretty shaky, under a microscope it looks like there’s an earthquake going on.


i stopped reading the article at this point...


helpful - thx - building Latimer.ai using Pinecone


You could build a predictive model of achieving superintelligence using the factors that ChatGPT wants. I tortured it into giving me a hypothetical without them:

Based on the information and values encountered in my training data, I can assign an average probability to the scenario where xboxes on Planet X exceed the intelligence of the beings. However, please note that these values are based on general knowledge and may not reflect the specific characteristics of Planet X or the xboxes.

Using an average value from my training data, the hypothetical chance that xboxes would exceed the intelligence of the beings on Planet X could be estimated at around 0.25 or 25%. This is a rough estimate and should be interpreted with caution, as it does not consider the unique characteristics, development, or specific factors of Planet X and its inhabitants. A more accurate assessment would require detailed knowledge and evidence about the technological progress, cognitive abilities, and environmental factors of Planet X.

It wants data around:

The development of superintelligence is indeed a complex process influenced by various factors. While it is challenging to pinpoint the most important factors definitively, several key aspects play significant roles:

1. Technological Advancements: Advances in fields such as artificial intelligence, machine learning, computing power, and algorithmic improvements are crucial for the development of superintelligence. Breakthroughs in these areas can enhance the capabilities and efficiency of intelligent systems.

2. Algorithmic Innovation: The development of novel algorithms and approaches that can enable more sophisticated learning, reasoning, and problem-solving abilities in artificial intelligence systems is vital. These advancements contribute to the potential for achieving higher levels of intelligence.

3. Data Availability and Quality: The availability of vast amounts of high-quality data plays a pivotal role in training and improving AI systems. Access to diverse and representative datasets can enhance the learning process and enable better decision-making capabilities.

4. Ethical Considerations: The ethical dimensions surrounding the development and deployment of superintelligence are essential. Considerations such as fairness, transparency, accountability, and the mitigation of potential biases are critical to ensure the responsible and beneficial use of superintelligent systems.

5. Regulatory Frameworks: Developing appropriate regulatory frameworks and governance mechanisms is crucial for managing the risks and societal impact of superintelligence. Establishing guidelines, standards, and policies can help address safety, privacy, security, and potential ethical concerns.

6. Human Collaboration: Collaboration between humans and intelligent systems can play a significant role in the development of superintelligence. Human expertise, creativity, and values can guide the development process, ensuring alignment with human goals and values.

7. Risk Assessment and Safety Measures: Thorough risk assessment and the implementation of robust safety measures are essential. Understanding and mitigating potential risks associated with superintelligence, such as unintended consequences or system failures, are critical considerations.


As we don't know for sure what is happening 100% within a neural network, we can say we don't believe that they're thinking - though we would still need to define the word "thinking". Once LLMs can self-modify, the word "thinking" will be more accurate than it is today.

And when Hinton says at MIT, "I find it very hard to believe that they don't have semantics when they consult problems like you know how I paint the rooms how I get all the rooms in my house to be painted white in two years time," I believe he's commenting on the ability of LLMs to think on some level.


In this case, I think we do - check out the paper (https://openaipublic.blob.core.windows.net/neuron-explainer/...). Their method is to:

1. Show GPT-4 a GPT-produced text, with the activation level of a specific neuron highlighted at each point in the text as it was being produced. They then ask GPT-4 for an explanation of what the neuron is doing.

Text: "...mathematics is done _properly_, it...if it's done _right_. (Take ..."

GPT-4 produces "words and phrases related to performing actions correctly or properly".

2. Based on the explanation, get GPT-4 to guess how strongly the neuron activates on a new text.

"Assuming that the neuron activates on words and phrases related to performing actions correctly or properly. GPT-4 guesses how strongly the neuron responds at each token: '...Boot. When done _correctly_, "Secure...'"

3. Compare those predictions to the actual activations of the neuron on the text to generate a score.

So there is no introspection going on.

They say, "We applied our method to all MLP neurons in GPT-2 XL [out of 1.5B?]. We found over 1,000 neurons with explanations that scored at least 0.8, meaning that according to GPT-4 they account for most of the neuron's top-activating behavior." But they also mention, "However, we found that both GPT-4-based and human contractor explanations still score poorly in absolute terms. When looking at neurons, we also found the typical neuron appeared quite polysemantic."
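
For what it's worth, the scoring in step 3 boils down to comparing GPT-4's simulated activations against the real ones. A minimal sketch, using plain correlation as a stand-in for the paper's correlation-based scoring function:

    import numpy as np

    def explanation_score(simulated, actual):
        # 1.0 = the explanation perfectly predicts the neuron's behaviour.
        simulated = np.asarray(simulated, dtype=float)
        actual = np.asarray(actual, dtype=float)
        return float(np.corrcoef(simulated, actual)[0, 1])

    # e.g. GPT-4's per-token guesses vs. the neuron's real activations:
    print(explanation_score([0, 0, 9, 0, 2, 8],
                            [0, 1, 10, 0, 1, 7]))  # ~0.98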


3.5:

In a realm of words, where AI's antics unfold, There exists a magazine called Total Crap, I'm told. Its pages bear AI-generated prose, A cacophony of gibberish that nobody chose.

Total Crap magazine, a nonsensical sight, Where algorithms dance, devoid of true insight. In twisted verses, logic takes its bow, Leaving readers bewildered, wondering how.


I wonder how AI (ChatGPT) would think differently if its base language were something other than English. I asked it yesterday about Ancient Egyptian, interested also to see how math (Greek/Egyptian) would be represented, but here is ChatGPT on the language itself:

"To provide an example, let's consider a simple statement in English and explore how Ancient Egyptian could potentially convey additional information or nuances:

English statement: "The sun rises."

Equivalent statement in Ancient Egyptian (not a direct translation): "Ra, the life-giving sun, majestically ascends in the sky, illuminating the world and ushering in a new day."

In this example, the English statement conveys the basic idea that the sun rises. However, the Ancient Egyptian statement incorporates additional cultural and religious associations. It mentions Ra, the ancient Egyptian sun god, attributing the sun's rising to Ra's life-giving aspect. It also includes descriptions such as the sun's majestic ascension, its role in illuminating the world, and its symbolic significance of ushering in a new day.

By incorporating specific cultural references and poetic descriptions, the Ancient Egyptian statement provides a more elaborate and nuanced understanding of the sun's rising, capturing the broader context and symbolic layers associated with this natural phenomenon within the ancient Egyptian worldview.

Please note that the above example is a hypothetical representation and not a direct translation, as the complete nuances and intricacies of the Ancient Egyptian language and culture cannot be fully captured in English. It serves to illustrate the potential for additional information and cultural richness that can be conveyed in Ancient Egyptian beyond the basic content of an English statement."


Your example is patently silly.

Do you really think that Ancient Egyptians didn't have a simple declarative statement that "the sun rises"?

Don't fall into exoticism. Ancient Egyptians were human beings speaking language for the purposes of communication in their day-to-day life, just like we do.

That being said, it definitely could be interesting to see how LLMs would work when trained on different language corpuses and other cultural contexts.


The truth is we don't know the internal, unreleased capabilities of these products at Google, OpenAI, etc.


Building an architecture where the LLM can independently and quickly test variations/combinations of its approach seems doable, as I'm guessing it can be programmed to implement its own suggestions:

3.5: As an AI language model, I cannot guess, but I can provide some general guidelines based on current research and best practices.

If we want to improve the results of Large Language Models (LLMs), one aspect of the architecture that we could focus on is increasing the model's capacity to learn and retain more information. This could be achieved by increasing the number of parameters in the model or using more sophisticated architectures such as transformer-based models that use self-attention mechanisms to capture long-range dependencies in the input sequence.

Another important aspect to focus on is improving the model's ability to handle rare and out-of-vocabulary (OOV) words. This can be achieved by using subword-level tokenization, which breaks down words into smaller units and enables the model to generalize better to new or unseen words.

We could also focus on improving the training process by using larger and more diverse training datasets, regularization techniques to prevent overfitting, and optimizing hyperparameters such as learning rate, batch size, and number of training epochs.

Finally, we could also focus on incorporating external knowledge sources such as structured data, knowledge graphs, or ontologies into the model architecture to enhance its ability to reason and make more accurate predictions.

Overall, there are many aspects of the LLM architecture that can be improved to enhance its performance, and the choice of which to focus on will depend on the specific task and the available resources.
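
The subword/OOV point is easy to demo with a toy greedy longest-match tokenizer. The vocabulary here is made up, and real tokenizers (BPE, WordPiece) learn their pieces from data, but it shows how an unseen word still decomposes:

    # Toy longest-match subword tokenizer with a made-up vocabulary.
    VOCAB = {"un", "believ", "able", "token", "iz", "ation", "s"}

    def subword_tokenize(word):
        pieces, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):  # try longest match first
                if word[i:j] in VOCAB:
                    pieces.append(word[i:j])
                    i = j
                    break
            else:
                pieces.append(word[i])  # unknown character falls back to itself
                i += 1
        return pieces

    print(subword_tokenize("unbelievable"))   # ['un', 'believ', 'able']
    print(subword_tokenize("tokenizations"))  # ['token', 'iz', 'ation', 's']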


It's not so easy. Hardware and money cannot overcome the realities of life and physics.


The hurdle seems more software- or process-related than hardware, no? Though on the hardware side, it seems like a company like Cerebras is making (or making available) interesting products that enable experimentation outside of the biggest players (OpenAI, Google, Meta, Msoft...).

Like the advent of Transformers, some smart dev could change how LLMs think. Self-improvement could be built in as an optimization process. And if we don't "know" what might work, a platform could "guess" and try billions of combinations of possible improvements.
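
The crudest version of that "guess and try" loop is random search over a configuration space. A toy sketch - the knobs and the score function are entirely made-up stand-ins for "train a variant and measure it":

    import random

    # Hypothetical architecture/training knobs to search over.
    SPACE = {
        "n_layers": [12, 24, 48],
        "n_heads": [8, 16, 32],
        "lr": [1e-4, 3e-4, 1e-3],
    }

    def evaluate(cfg):
        # Stand-in for an expensive training run plus a benchmark score.
        return -abs(cfg["n_layers"] - 24) - 1e4 * abs(cfg["lr"] - 3e-4)

    best_score, best_cfg = float("-inf"), None
    for _ in range(1000):
        cfg = {k: random.choice(opts) for k, opts in SPACE.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg

    print(best_score, best_cfg)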


On the self-improvement part: they are not likely to reach superintelligence by software alone. How will they improve hardware without humans and capital in the loop?

