Hi!
Post by donburch on Jul 11, 2006 21:02:08 GMT 10
Just found this website and joined.
I have 25 years of programming experience, but a few months back I decided to learn some electronics and metalworking and have a go at robotics as a hobby.
I have been amazed that there seem to be so few hobby robotics enthusiasts in Aus! Especially given that every university and lots of schools seem to be into it now.
I am in Chatswood (Sydney north shore), and have also been wondering if there would be enough interest to start a club here. I guess in the meantime this website and forum is a good place to meet.
Post by donburch on Sept 3, 2006 22:11:51 GMT 10
Post by donburch on Aug 29, 2006 21:50:47 GMT 10
Hi Bones, these look terrific, especially wireless for autonomous bot operation!
Are you still keen on them, or have you found some disadvantage or a better product?
Post by donburch on Aug 29, 2006 21:37:04 GMT 10
I came across Waysmall computers - a complete Linux system that fits in your hand. Or a bare motherboard only 80mm x 20mm containing a 400MHz XScale RISC CPU, 16MB flash, 1 MMC/SD slot, Bluetooth, and a 60-pin expansion connector for US$169.
Post by donburch on Dec 7, 2007 21:53:52 GMT 10
Sounds as though you didn't enjoy the travelling medicine show.
Post by donburch on Nov 3, 2007 10:45:28 GMT 10
My wife assumed that I would want to go ... but having seen some videos, and knowing that it's just a carefully rehearsed show to make ASIMO appear to have some intelligence, I wasn't going to bother.
On second thoughts, maybe I'll take my son, and his friend who is into robots.
Post by donburch on Nov 3, 2007 11:57:48 GMT 10
Hi Sandgroper,
I also have gone the BS2 route into robotics, and have also realised that my programs use only a small fraction of the available memory.
Post by donburch on Apr 15, 2007 20:54:40 GMT 10
It's looking as though AVR32 is the way for me to go ... a good amount of 32-bit CPU grunt, and good programmability with public-domain tools, libraries, and even a choice of (Linux) OSes - and so affordable!
However, I'm not in a rush to get one of the first off the production line ... as Botman suggested, there's plenty I can do first with my PC, a USB webcam and the RoboRealm software. Then add a servo controller and a couple of servos for pan and tilt. And THEN start thinking about integrating it into a mobile platform. In the meantime, there's plenty of reading for me to catch up on ;-)
Cheers,
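The "servo controller plus a couple of servos" step above can be sketched in a few lines. This is a hypothetical example, assuming an SSC-32-style serial servo controller (ASCII commands of the form `#<channel>P<pulse_us><CR>`); the channel assignments and the angle-to-pulse mapping are my assumptions, not anything from the posts.

```python
# Sketch of driving a pan/tilt servo pair from a PC over serial.
# Assumes an SSC-32-style controller command format ("#<ch>P<us>\r");
# channels 0 (pan) and 1 (tilt) are illustrative choices.

def servo_command(channel: int, angle_deg: float) -> bytes:
    """Map an angle (0-180 deg) onto a 500-2500 us pulse-width command."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("angle out of range")
    pulse_us = int(500 + (angle_deg / 180.0) * 2000)
    return f"#{channel}P{pulse_us}\r".encode("ascii")

def point_camera(pan_deg: float, tilt_deg: float) -> bytes:
    # Concatenate the two channel commands into one serial write.
    return servo_command(0, pan_deg) + servo_command(1, tilt_deg)
```

With something like pyserial you would then write these bytes to the controller's port; the port name and baud rate depend entirely on the controller you end up buying.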
Post by donburch on May 19, 2007 9:34:54 GMT 10
I was just looking at a couple of Machine Vision articles (from Robots.net). I guess you hardware types would be more interested in the CCD Imager project, which is mostly hardware because the ATmega32 is nowhere near fast enough to drive the camera. I liked the CMUcam3, which adds an ARM7TDMI for processing the image at US$240 and uses open-source software. The technical report from the CMU Robotics Institute, titled "CMUcam3: An Open Programmable Embedded Vision Sensor", has a reasonable amount of detail on the hardware setup. However, the small on-board memory creates limitations which I consider significant (at least for the sort of things I would like to do eventually).
Post by donburch on May 12, 2007 22:43:35 GMT 10
I heard a joke the other day about a boss and his worker, where the punch line ends something like: "Worker: I have to work 4 days, because I can't afford more than 3 days off on what you pay me." If you update that joke to include John Howard, it becomes: "I have to work 7 days because I can't afford any days off on what you pay me." For some absurd reason they have refused to look at what happened in New Zealand with similar IR laws. Probably because they are self-serving liars in the pocket of big business. Unfortunately we all get what the lowest common denominator deserves.
Post by donburch on May 4, 2007 23:23:51 GMT 10
I would use a CD4066 to switch between one video source and the other. The ADC1175 is a relatively expensive surface-mount part. The CD4066 is cheap and has legs. The uC uses a simple on/off to switch through one channel or the other.

There you go again, diving into specific chips when I'm still trying to get my head around the concepts. I don't want to get pedantic here, but years ago I was thinking of using a cheap AV switch for video editing (before I had a computer that was halfway capable), and someone pointed out that the two input sources are not co-ordinated. When you turn the knob, the TV then has to re-synchronise to the start of frame from the other source, resulting in a burst of static in the output whenever you switch. The expensive AV switch, however, buffers both input sources, so that the output simply reads from the other buffer at the start of the next frame. So... I think that just arbitrarily switching between inputs may not turn out to be the best solution. Even if you detect the start of a frame and switch then, your decoder will receive a part frame which it will have to recognise and discard.

There's a helluva lot of what the human brain does at lower levels that we really don't understand. I was impressed by the way the human brain automagically compensates for defects in the design of the eye (in "Robot" by Rodney A Brooks). Certainly ASIMO is an example of simulated human bipedal movement, and NOT an example of Artificial Intelligence! The BIG question is how to make a robot learn something that isn't within its pre-programming - can a PLL radio receiver "learn" any task other than locking onto a radio signal ... the method of which was actually pre-programmed? I am increasingly aware of how much our subconscious does for us ... we just think something (like recalling a humorous moment) and our subconscious automagically implements the bodily reaction (in this case a big grin ... which the female stranger on the train misinterprets, and slap).
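The frame-boundary issue above can be modelled in a few lines: latch the requested source change, but only apply it on the first line of the next frame, so the output never contains a partial frame. This is a toy sketch, not real video code - the class name and the tiny frame size are inventions for illustration (PAL is 625 lines).

```python
# Toy model of the AV-switch sync problem: a switch that only takes
# effect at a frame boundary, so no partial frames reach the decoder.

LINES_PER_FRAME = 4  # illustrative only; real video has hundreds of lines

class FrameSyncSwitch:
    def __init__(self):
        self.active = 0       # source currently routed to the output
        self.pending = None   # requested source, applied at next frame start
        self.line = 0         # current line position within the frame

    def select(self, source: int):
        self.pending = source  # latched, NOT applied immediately

    def next_line(self, sources):
        # Only honour a pending switch on line 0 (start of frame).
        if self.line == 0 and self.pending is not None:
            self.active = self.pending
            self.pending = None
        out = sources[self.active][self.line]
        self.line = (self.line + 1) % LINES_PER_FRAME
        return out
```

Switching mid-frame with this scheme still finishes the current frame from the old source, which is exactly what the buffered (expensive) AV switch described above achieves in hardware.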
As babies we do learn how to see, to talk, to think - but as adults we no longer know how we did it.

On the other hand, I feel that robotics researchers are obsessed with making machines in our own image. Are two legs really the best way to move around? Just because dogs and spiders aren't the dominant species does not mean that 4 or 8 legs are not effective means of locomotion. For that matter, human brains don't work in binary - yet that hasn't stopped researchers making very good use of computers. I'm interested in how my robot can achieve its objectives, not in how a human might do it. My robot uses wheels because that seems most efficient for a machine. I want to use sensors and cameras efficiently, and not be constrained by how humans work.

Also I think that we have got hung up on the way we use computers. Sure, computer software is constantly improving - but they are small incremental steps compared with the leaps of prior decades, or the rate of hardware improvements. We are stuck in a rut, and we keep trying to make computers a copy of human thinking - ignoring the fact that a computer is organised differently ... but what the new paradigm is, I cannot guess.

Since my current pay rate is low I've decided to only work 3 days ... but somehow I still don't seem to achieve more in other areas of my life, such as spending time with my 12-year-old. You're right there, Rod ;D

But I happened to get something for work at Jaycar last week and picked up some 1" spacers and 3mm bolts to fit, so now Kibo is in one piece. Hooray! Seems a fair price for your time.
Post by donburch on Apr 30, 2007 8:50:44 GMT 10
Oh, I'm interested in machine vision alright ... I was mentioning the plans I had then for vision. The idea was to frame-slice (multiplex) two CMOS camera signals thru a single ADC1175 (analog-to-digital) chip and process each frame using the ATmega128 (on a JED Micro AVR570 board). Feature extraction (undecided which features at the time) and stereo depth were to be the outputs, in a "high-level" coded format for later-stage processing. This would be streamed via serial comms.

I continue to be amazed at how quickly what you hardware guys say goes over my head ;D Basically you're thinking of feeding two cameras into one input stream to the ADC. Sounds to me like there will be issues synchronising the frames, and then for the ATmega to know whether it is looking at the Left or Right image - since it will still need to store both Left and Right images simultaneously for the depth-perception comparisons. Being a software person, I would be tempted to use 2 ADCs instead, because I'm sure my time to design/build/debug the frame slicing would far exceed the cost of a second ADC.

To be honest, the feature extraction and subsequent depth calculation give me the willies, 'cause I'm afraid that I won't be able to pick up the math. That's where I see RoboRealm allowing me to easily experiment with various filters. Once I know what filters and algorithms to use, it should then be easier to whip up a control program, link in the appropriate subroutines from OpenCV (or similar), compile and download to the bot. I really don't like the idea of doing all the experimenting on the bot, with a long program/compile/download/test cycle each iteration, and with minimal debugging tools in the limited bot environment.

But I did get to play with the RoboRealm software the weekend before Easter. To be fair, I couldn't get anything useful out of it that I thought would lead to a machine being able to do any hand/eye coordination stuff at all.
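The Left/Right bookkeeping discussed above can be sketched very simply: if the ADC delivers frames strictly interleaved, the controller only has to track frame parity to route each frame into the correct buffer before any depth comparison. The strict L, R, L, R ordering and the trivial SAD helper below are my assumptions for illustration, not the actual AVR570 design.

```python
# Sketch of the frame-slicing bookkeeping: frames arrive interleaved
# (L, R, L, R, ...) from a single ADC, and must be demultiplexed into
# separate Left/Right buffers before stereo comparison.

def demux_stereo(frames):
    """Split an interleaved frame stream into (left, right) lists,
    assuming even-indexed frames are Left and odd-indexed are Right."""
    left, right = [], []
    for i, frame in enumerate(frames):
        (left if i % 2 == 0 else right).append(frame)
    return left, right

def sad(row_l, row_r):
    """Sum of absolute differences between two pixel rows - the basic
    matching cost a disparity search would minimise while sliding one
    row against the other."""
    return sum(abs(a - b) for a, b in zip(row_l, row_r))
```

This also makes the cost of the single-ADC approach concrete: lose sync once, and every subsequent frame lands in the wrong buffer - which is the argument for either a hardware sync marker or the second ADC.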
The "blobs" feature didn't work - why are there so many, and why are they all at the bottom of the screen? The ability to put boxes around things just created a zillion little boxes, or one big box covering everything. And the edge/outline stuff might be useful, except that I want it boxed too, and that can't be done. I played with all manner of colour bit depths, grayscale (and bit depth), bright/sunlight room lighting, a darkly lit room, you name it. I was not satisfied. I hope you other guys have more success. And just how fast does the PC need to be? Is 2 GHz not fast enough? I found that more than three functions creates a really slowly changing output.

I haven't really tried RoboRealm yet, though the first few tutorials on their website (line following and object tracking) make it look very simple - as I'm sure it is when you already know which filters to combine. Their website's FAQ 12, "What are the minimum requirements to run RoboRealm?", recommends "CPU - As fast as possible - 2GigHz plus" with the comment that "it is very easy to quickly add a couple filters to reduce the frame rate to slower than usable". They point out that RoboRealm runs on their BucketBot robot, which is a 386, but they had to reduce the resolution to 80x60 and accept about 10fps.

But, if you think you're not putting enough time into it, then maybe you're not putting enough time into it!!! ;D

Y'know, I just feel that I'm not putting enough time into my robot. The base has been sitting there (well, OK, now it's sitting here on my desk upstairs) for a good few months now, waiting for what? Waiting because the spacers I picked up at Dick Smith have tiny screws which aren't long enough to go through the acrylic of Kibo's base! Waiting three months or so for me to get around to organising some other spacers to mount the BoE controller on the acrylic. How pathetic is that? But alas, that's not all.
I have had my BoE-Bot manual open on my desk at "Chapter 6: Light Sensitive Navigation with Photoresistors", ready ... but I haven't started the exercises. In three months I have decided to use ultrasonics (rather than whiskers or infra-red) for collision detection, and have started to compare prices. Sure, I have been upgrading and fixing PCs (now 7 for us 3 people); and I tell myself that I don't want to start working on something robotics-related in the late evening when my mind and body are already tired. So instead I busy myself with surfing the web, collecting information "to look at later". Today I went down to Bunnings and bought a packet of 3/16" x 50mm bolts and nuts, and that job should be finished by the time I finish this ... damn! 3/16" is too big to go through the mounting holes on the BoE board. Maybe I'll use some wooden dowel ;D
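The ultrasonic collision detection mentioned above comes down to one calculation: a ping sensor reports the round-trip echo time, and the distance is half that time multiplied by the speed of sound (~343 m/s in air at 20 C). A minimal sketch (function name is mine):

```python
# Back-of-envelope helper for ultrasonic ranging: round-trip echo
# time -> one-way distance. Speed of sound is ~343 m/s at 20 C.

SPEED_OF_SOUND_M_S = 343.0

def echo_to_distance_m(echo_time_s: float) -> float:
    """Convert a round-trip echo time (seconds) to distance (metres).
    Divide by two because the pulse travels out AND back."""
    return echo_time_s * SPEED_OF_SOUND_M_S / 2.0
```

So a 2 ms echo means an obstacle roughly 34 cm away - comfortably within reaction range for a slow bot, which is why these sensors suit collision detection better than whiskers (no contact needed) or IR (less affected by ambient light).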
Post by donburch on Apr 16, 2007 21:27:42 GMT 10
Hi Botman, ... the RoboRealm software will accept a standard USB camera, so there isn't any need to go and purchase a video capture card just to start basic testing. Sorry I haven't been around much, or I would have pointed this out. Makes for an easy test/development rig ... even if it's only used to work out which of the OpenCV filters/algorithms to compile into your on-board controller. ... Actually, I don't remember your expressing any interest in machine vision when we met. Is all this for my benefit, or for yours? Have you decided that I'm all talk and no action, so you'll show me up? ;D If so ... well, thanks!
Post by donburch on Mar 25, 2007 16:25:55 GMT 10
Thanks guys, you have (indirectly) answered my question ... that you haven't had first-hand experience with BrainStem. Thanks also for your impressions ... like you, I think it sounds interesting, especially the idea of creating "configurable reflexive behaviors" for lower-level tasks. I get the impression that BrainStem isn't designed for high-level tasks (like machine vision), though I suspect it would also be good for mid-level jobs due to its C and Java programming.

Erm. Sorry Don. I wasn't trying to pick on you. Shoot! I feel guilty.

No reason to feel guilty, I took it in fun! Mind you, I've been rather stressed at work lately, and depressed, so maybe my reply sounds more serious than I intended.

As always, you're right on target, Botman! I have been intending to use RoboRealm, if only to work out which techniques to use in the 'bot. However, you make it sound very cheap to get vision working on the PC (even just as a demo). My ideal would be if Evolution would still sell the ER1 for US$299 ... but now the price is $25,000 (for the ERSP software, with an ER1 thrown in as a free extra). A laptop should be plenty powerful, high-level programmable, and a reasonable weight and size to build into a robot. All it needs is a sturdy lightweight base with a couple of motors and the ability to add a USB camera and a few sensors by USB or serial. But even without the ER1 base, a pan/tilt camera would be a good addition to a PC to get familiar with the hardware and software. If only it were that simple!
Post by donburch on Mar 25, 2007 8:49:08 GMT 10
You're trying to cheat, Don!!!! You must learn assembler and become one of the initiated!!!

Does Digital PDP-8 Assembler count? Mind you, that was back in '75 while I was at high school in NZ ... Then some Motorola 6800 Assembler in a night class (as a diversion from commercial applications programming during the day). And I did write one "program" for the Zilog Z80 running CP/M ... The TRS-80 Model II would scroll the screen up OK, but I wanted to scroll part of the screen down as well - it turned out to be a memory-mapped screen, so it only took 4 instructions, and 3 of those were loading the registers!

Since then I've found that high-level languages achieve far more for the same amount of effort, especially when using existing libraries ... I accept that controlling servos etc. is best done at low level, but somewhere between there and machine vision there must be a change of toolset. And coming from a software background, I feel more confident using a higher-level language. This Stamp BASIC is soooo old-fashioned!

Already got several ... Flat or Phillips? What size?

I may be wrong here, but doesn't this kinda assume using PIC controllers? Surely not!
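The memory-mapped scroll trick above translates directly into high-level terms: treat the screen as a flat buffer of rows, and scrolling part of it down is just copying rows from the bottom of the region upward (bottom-up, so nothing is overwritten before it moves). A sketch, with made-up sizes rather than real TRS-80 values:

```python
# High-level sketch of the memory-mapped screen-scroll trick:
# scroll rows screen[top:bottom] down by one, blanking the top row.

def scroll_region_down(screen, top, bottom, blank=" " * 8):
    """Shift rows in [top, bottom) down one place.
    Copy bottom-up so source rows aren't clobbered before being moved -
    the same ordering a Z80 block copy would need here."""
    for row in range(bottom - 1, top, -1):
        screen[row] = screen[row - 1]
    screen[top] = blank
    return screen
```

On the Z80 this reduced to a block-move instruction plus register setup, which is why it only took a handful of instructions once the screen turned out to be memory-mapped.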