1. Re: Pixel Bot Wars -Reply

I took a look at your web page.  Nice job getting the info up so fast.

Some points for all interested parties to consider.  We have two different
ideas competing for realization here.  First of all, Irv's original problem
simply asked who can use the least "real" processing time getting their
pixel from start to finish.  Now, with the additional suggestions and
hoopla, the current problem is that of creating a complete maze-racing
environment where "real" processing time is no longer a consideration.  I
personally am much more interested in simulating the bots themselves,
but to make the game a fair one, we need to be able to partition the
robots' "virtual" processors to achieve equity.  My original problem (and it
still stands) is that my super-light, no-weapons, wall-following bot that
takes X "real" processing time to calculate each new move could
conceivably be beaten to the finish line by a hulking behemoth that takes
100X "real" processing time to calculate each move, when in "real" life my
speedy little critter should be zipping around the monster.  In real life, the
original room-analyzing robots simply couldn't move because there was
so much information to process, while little obstacle-avoidance robots
could zip around (although without goal-oriented behavior).

So, after all that, I have some suggestions.  First of all, I like the idea that
the code is protected, and that a single call to the bot will simply produce
a sequence.  Of course, this sequence should include facing, position,
whether a weapon was fired and in what direction, whether a mine was
laid, whether the bot is destroyed, and maybe more information -
basically a status string describing the robot's current state.
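To make that concrete, here is one possible shape for the per-call status record. The field names are my own guesses, not a settled format:

```python
from dataclasses import dataclass

@dataclass
class BotStatus:
    """Everything a single call to the bot reports back to the arena.
    Field names are hypothetical -- the actual format is still open."""
    x: int               # position
    y: int
    facing: int          # degrees, a multiple of 45
    fired: bool          # was a weapon fired this cycle?
    fire_dir: int        # direction of fire, if any
    mine_laid: bool      # was a mine dropped this cycle?
    destroyed: bool      # is the bot dead?
```

The arena would read this record after each call instead of trusting (or timing) the bot's internal code.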
Next, in order to slice up the processing of each robot, each function that
a robot can perform needs to be weighted with a virtual clock cycle.  For
instance, sensing (no matter how far away) in one direction (i.e. a line)
would take one clock cycle.  Moving (whatever distance the robot is
allowed to move based on sensor arrays, weapons, and armor
selected) would take one clock cycle.  Firing a weapon would take one
clock cycle.  Turning (whatever amount is allowed based on equipment
selected) would take one clock cycle.
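The scheme above boils down to: every primitive action costs exactly one virtual cycle, no matter how much "real" CPU the bot's brain burns deciding on it. A minimal sketch of that scheduler, with `choose_action` and `clock` as my own stand-in names:

```python
# Every primitive costs one virtual clock cycle, regardless of how much
# real processing time the bot spent choosing it.
ACTION_COST = {"sense": 1, "move": 1, "fire": 1, "turn": 1}

def run_round(bots):
    """Charge each bot one action's worth of virtual time per round,
    so a heavyweight brain can't outrun a light one on raw CPU."""
    for bot in bots:
        action = bot.choose_action()      # bot may think as long as it likes...
        bot.clock += ACTION_COST[action]  # ...but pays the same virtual price
```

The point is that fairness comes from the virtual clock, not from measuring wall-clock time.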

Now, creating the robot would be a matter of balancing sensors,
weapons, and armor against the resulting movement/turning ability, and
then coding the actual robotic logic.

I would suggest that sensors weigh one weight unit for each pixel of
distance they can sense, that weapons weigh two weight units plus one
unit for each additional point of damage plus one unit for each additional
pixel of range, and that armor weighs one weight unit for each defensive
point (DP).

So, a PixelBot with forward, left, and right sensors each of distance 5
pixels, a forward-firing weapon of range 5 and damage 10, and 20 DPs
of armor would weigh:

sensors: 5+5+5 = 15
weapons: 2+4+9 = 15
armor:   20
total:   50 weight units
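My reading of those rules as a quick sanity-check function (weapons are (range, damage) pairs; the base 2 units cover the first point of each):

```python
def bot_weight(sensor_ranges, weapons, armor_dp):
    """Weight under the proposed scheme:
    sensors: 1 unit per pixel of sensing range
    weapons: 2 units base, +1 per damage point beyond the first,
             +1 per pixel of range beyond the first
    armor:   1 unit per defensive point (DP)."""
    w = sum(sensor_ranges)
    for rng, dmg in weapons:          # each weapon is a (range, damage) pair
        w += 2 + (dmg - 1) + (rng - 1)
    return w + armor_dp
```

For the example bot, `bot_weight([5, 5, 5], [(5, 10)], 20)` gives 50, matching the tally above.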

Then, assign a movement scale: i.e., number of pixels traveled = 10 -
floor(weight/10), so the above robot could move 5 pixels per move.
Also, each robot should be able to move forward or backward as
desired (simulating treaded robots that cannot move while turning).

And we need a turning scale: maximum degrees turned = number of
pixels traveled / 10 * 360, mapped into multiples of 45 degrees.  So the
above robot could turn a maximum of 180 degrees.  This function would
need to be worked on - it's not a complete solution.
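Both scales as I understand them, with the turning result snapped down to the nearest multiple of 45 (one way to do the "mapping" - admittedly not the final word):

```python
from math import floor

def move_pixels(weight):
    """Pixels per move = 10 - floor(weight/10)."""
    return 10 - floor(weight / 10)

def max_turn(pixels):
    """Max degrees per turn = pixels/10 * 360, snapped down
    to a multiple of 45 degrees."""
    return int(pixels / 10 * 360 // 45) * 45
```

The 50-unit example bot gets `move_pixels(50)` = 5 pixels per move and `max_turn(5)` = 180 degrees per turn.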

Of course, this makes the robot template (which we must provide) fairly
complex.  Our robot brain has to decide *what* action to perform (turn,
move, fire, or sense) at each call.  So we are basically going to have to
implement some EZ state machine representation for our robot logic.

So, assuming that reading all of our sensors takes one clock cycle, the
information has to be put in a sensor register (sequence sr) for reading
during the next processing cycle.  Here's some pseudocode off the top
of my head, based on the above example robot:

if state = SENSE then
  sense()
  state = ANALYZE
elsif state = ANALYZE then
  if sr[2] = 1 then      -- forward sensor triggered
    fire()
    state = EVADE
  elsif sr[1] = 1 then   -- left sensor triggered
    move_forward()
    state = SENSE
  elsif sr[3] = 1 then   -- right sensor triggered
    move_forward()
    state = EVADE2
  else                   -- nothing in sight, don't get stuck here
    move_forward()
    state = STABLE
  end if
elsif state = EVADE then
  move_back()
  state = EVADE2
elsif state = EVADE2 then
  turn_right(45)         -- assuming max turn is at least 45
  state = EVADE3
elsif state = EVADE3 then
  move_forward()
  state = STABLE
elsif state = STABLE then
  move_forward()
  state = SENSE
end if
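For anyone who wants to poke at it, here's the same state machine as runnable Python. The sense/move/fire primitives are hypothetical stubs (logged as strings), I've added a fall-through for the no-sensor-triggered case so the bot can't get stuck analyzing, and the sensor register is 0-indexed as [left, forward, right]:

```python
# States of the example bot's brain; one step() call = one virtual cycle.
SENSE, ANALYZE, EVADE, EVADE2, EVADE3, STABLE = range(6)

def step(state, sr, log):
    """Run one cycle and return the next state.
    sr is the sensor register: [left, forward, right] (0-indexed here)."""
    if state == SENSE:
        log.append("sense")           # fills sr for the next cycle
        return ANALYZE
    elif state == ANALYZE:
        if sr[1] == 1:                # forward sensor triggered
            log.append("fire")
            return EVADE
        elif sr[0] == 1:              # left sensor triggered
            log.append("move_forward")
            return SENSE
        elif sr[2] == 1:              # right sensor triggered
            log.append("move_forward")
            return EVADE2
        else:                         # nothing in sight
            log.append("move_forward")
            return STABLE
    elif state == EVADE:
        log.append("move_back")
        return EVADE2
    elif state == EVADE2:
        log.append("turn_right 45")   # assuming max turn is at least 45
        return EVADE3
    elif state == EVADE3:
        log.append("move_forward")
        return STABLE
    else:                             # STABLE
        log.append("move_forward")
        return SENSE
```

With the forward sensor triggered, the bot fires and runs the whole evade sequence: sense, fire, move_back, turn_right 45, move_forward, move_forward, then back to sensing.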

Granted, this is pretty cheesy and needs to be worked on quite a bit.  But
I feel that in order to level the playing field we have to have a basic set of
utilities that the robots can utilize, and build from there.

I am expecting some comments, so please don't disappoint me =^)

-Jay
