RESEARCH IN DATABASE MANAGEMENT SYSTEMS,
VLSI DESIGN AND NEURAL NETWORKS AT IDAHO STATE UNIVERSITY
Computer Science Program
College of Engineering, ISU Box 8060
Tel. (USA) 208-282-3405
Pocatello, Idaho 83209, U.S.A.
as performed by Prof. Vitit Kantabutra, Senior Member, IEEE
NEW! See my Academia.edu page.
Database Management Systems Research
The Relational database model is the dominant data model for DBMSs today.
However, Relational DBMSs have the problem of data redundancy, which causes
a great loss of efficiency and possible data inconsistency. Additionally, it
is much more natural to design a database using the E/R (Entity-Relationship)
model than using the Relational model. Network DBMSs followed the E/R model,
but unfortunately they were not supported by a declarative language, and they
used data storage locations as keys. Our new type of DBMS, called
Intentionally-Linked Entities (ILE), uses references to represent
relationships and also permits support by a declarative language. A prototype
implementation is under way.
See this reference.
*NEW* See the patent application for the latest exposition of the ILE
database system.
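The core ILE idea, entities tied together by direct references instead of join keys, can be sketched in a few lines of Python. This is a hypothetical illustration only; the class and method names are mine, not the actual ILE interface.

```python
# Hypothetical sketch of "Intentionally-Linked Entities": relationships are
# stored as direct object references rather than foreign-key values that a
# Relational DBMS would have to match up at query time with a join.
# All names here are illustrative, not the actual ILE API.

class Entity:
    def __init__(self, **attrs):
        self.__dict__.update(attrs)
        self.links = {}          # relationship name -> list of linked entities

    def link(self, rel, other):
        # Record the relationship in both directions as plain references.
        self.links.setdefault(rel, []).append(other)
        other.links.setdefault(rel, []).append(self)

dept = Entity(name="Engineering")
alice = Entity(name="Alice")
alice.link("works_in", dept)

# Traversal needs no join and no key comparison: just follow the reference.
print(dept.links["works_in"][0].name)   # -> Alice
```

Because each relationship is stored once, as a pair of references, there is no redundant copy of the key data that could drift out of sync.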
VLSI Research
Prof. Vitit Kantabutra of ISU's College of Engineering has published results
in two major subareas of VLSI design:
Computer Arithmetic Circuits
- New High-Speed Adders Using Carry-Strength Signals - This paper presents
several new high-performance, low-cost carry-skip adders, which involve a
bit-block structure that computes propagate signals called "carry strength"
in a ripple fashion. A 32-bit adder designed as described here and realized
in 0.6-um CMOS technology shows a performance gain of more than 30% over a
conventional carry-skip adder, and reaches performance comparable with that
of a traditional block-CLA adder while saving 41% silicon area and 42% power.
(This and related results have been presented at SSGRR 2000 in L'Aquila,
Italy. Additional adder designs were presented at the first Online Symposium
for Electronics Engineers (www.osee.net) and archived at the same site.)
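The block-skip structure behind this family of adders can be illustrated with a small bit-level simulation in Python. This is a sketch of a generic carry-skip adder, not of the patented carry-strength circuit; the block size and width are arbitrary choices.

```python
def carry_skip_add(a, b, width=32, block=4):
    # Simulation of a blocked carry-skip adder: within each block the carry
    # ripples, but a block in which every bit position propagates
    # (p_i = a_i XOR b_i for all i) lets the incoming carry bypass the
    # block through a skip multiplexer instead of rippling through it.
    carry = 0
    result = 0
    for base in range(0, width, block):
        bits = range(base, base + block)
        # Block propagate: true iff the whole block propagates its carry-in.
        p_block = all((((a >> i) & 1) ^ ((b >> i) & 1)) == 1 for i in bits)
        cin = carry
        c = cin
        for i in bits:
            ai, bi = (a >> i) & 1, (b >> i) & 1
            result |= (ai ^ bi ^ c) << i
            c = (ai & bi) | (c & (ai ^ bi))   # ripple carry within the block
        carry = cin if p_block else c         # skip mux selects the carry-out
    return result & ((1 << width) - 1), carry
```

In hardware the skip path is what saves time: a long carry chain crosses a propagating block through one multiplexer delay instead of `block` full-adder delays.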
- A New Radix-4 CORDIC Algorithm for Computing Sine and Cosine - Unlike
existing algorithms, my new algorithm does not require extra iterations for
convergence, nor does it involve a non-constant multiplicative factor. For
high-precision implementations, all it requires is a rare trap to a longer
iteration. Contact me by e-mail for more information. *U.S. PATENT ALLOWED;
APPROX. ISSUE DATE: END OF JAN. 2002*
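For comparison, the classic radix-2 rotation-mode CORDIC that such algorithms improve on looks like this in Python. This is a textbook sketch; the radix-4 recurrence and its constant scale factor are described in the patent, not here.

```python
import math

def cordic_sin_cos(theta, n=32):
    # Classic radix-2 rotation-mode CORDIC: rotate the vector (1, 0) by
    # theta using n shift-and-add micro-rotations of +/- atan(2^-i).
    # The scale factor K is constant because every iteration rotates by
    # a fixed-magnitude angle; only the sign d varies.
    K = 1.0
    for i in range(n):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta          # z holds the residual angle
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0    # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return y * K, x * K                # (sin theta, cos theta)
```

A radix-4 scheme chooses digits from {-2, -1, 0, 1, 2} and so needs roughly half as many iterations, but in general that makes the scale factor data-dependent; avoiding both the extra iterations and the non-constant factor is precisely the contribution claimed above.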
- A New Breed of Division Hardware - A Possible Cure for Pentium-Style
Division Bugs.
- Ultra-Fast CMOS Adders - Seemingly the Fastest Adders in CMOS Technology
*PATENTED (U.S. PATENT NO. 5,508,952, issued April 16, 1996)*
- Optimum Carry-Skip Adders - Fast and Small, These Adders Are Suitable for
Highly Parallel or Low-Cost Applications.
- New, Very Fast Variant of CORDIC Hardware - Uses quick approximation via
fast, low-precision arithmetic elements, yet attains full precision via
quick, occasional correction steps. *PATENTED: U.S. PATENT NO. 6,055,553,
issued April 25, 2000.*
Low-Power VLSI
- Low-Power, Race-Free Asynchronous Circuits - Designed using an improved
version of one of Huffman's classic state-assignment algorithms to obtain
circuits that use less energy due to reduced switching activity. This result
has been published in the IEEE Transactions on Computers, and also
republished in Low-Voltage/Low-Power Integrated Circuits and Systems, edited
by E. Sanchez-Sinencio and A. G. Andreou, IEEE Press, 1999.
- Complex Gates Are Better than Standard Cells - A small study that shows why
complex gates are more suitable than the much more popular standard-cell
technology for low-power VLSI design. This result has also been published in
Low-Voltage/Low-Power Integrated Circuits and Systems, edited by
E. Sanchez-Sinencio and A. G. Andreou, IEEE Press, 1999.
Neural Networks Research
- Observed that traditional backpropagation's weights appear to travel in an
orderly fashion when directions are averaged. Came up with a simple algorithm
that relies on the average directions of several backprop iterations and
restarts automatically when there is little promise of convergence. This
algorithm is nicknamed "Hairpin" because the average weight trajectory was
observed to be often straight or mildly curved, with occasional hairpin
turns. Results for 10 random test runs are shown in the table below. After a
few dozen runs the new algorithm converges very reliably. Plans are under way
to test the algorithm on character recognition.
- The project just mentioned is now finished and accepted for publication.
Test results were excellent. See
http://www.webs.uidaho.edu/epscor/Success/neural_net.htm

Hairpin algorithm on the XOR problem with a 2-level, 3-neuron network.
Convergence time (sec.) vs. traditional backprop; lambda (sigmoid
steepness) = 9, eta (learning rate) = 1; HP Celeron 1.3 GHz machine.

         Trad. gradient descent    New alg. with restart
         Conv. time (sec.)         Conv. time (sec.)
         18.94                     2.25
         no conv                   2.39
         2.28                      1.67
         1                         1.01
         2.12                      1.96
         2.29                      1.07
         1.62                      1.2
         no conv                   3.11
         1.61                      1.93
         21.28                     5.4

  avg    6.39                      2.20
  stdev  8.50                      1.30
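The averaged-direction idea can be sketched as follows in Python. This is my paraphrase of the description above, not the published Hairpin algorithm; the finite-difference gradient, the averaging window, and the restart test are all illustrative choices.

```python
import math
import random

# XOR training set: ((inputs), target)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x, lam=9.0):
    # Steep sigmoid, lambda = 9 as in the table above; argument clamped
    # to avoid floating-point overflow.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, lam * x))))

def forward(w, x):
    # 2-level, 3-neuron network: two hidden neurons feeding one output.
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    # Sum of squared errors over the four XOR patterns.
    return sum((forward(w, x) - t) ** 2 for x, t in DATA)

def grad(w, eps=1e-5):
    # Finite-difference gradient keeps the sketch short; a real
    # implementation would use hand-coded backpropagation.
    e0 = error(w)
    g = []
    for i in range(len(w)):
        wp = list(w)
        wp[i] += eps
        g.append((error(wp) - e0) / eps)
    return g

def hairpin_train(eta=1.0, avg_len=5, steps=2000, seed=1):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(9)]
    recent = []
    for step in range(steps):
        recent.append(grad(w))
        recent = recent[-avg_len:]
        # Step along the average direction of the last few iterations,
        # smoothing out the zig-zag of individual backprop steps.
        d = [sum(col) / len(recent) for col in zip(*recent)]
        w = [wi - eta * di for wi, di in zip(w, d)]
        e = error(w)
        if e < 1e-3:
            break                        # converged
        if step % 500 == 499 and e > 0.9:
            # Crude automatic restart when there is little promise left.
            w = [rng.uniform(-1, 1) for _ in range(9)]
            recent = []
    return w, error(w)
```

Averaging the direction over a short window is what straightens the trajectory between the occasional hairpin turns that give the algorithm its nickname.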

- In a slightly earlier work, we presented a new type of error-backpropagation
gradient descent algorithm. In the new algorithm, unless the error is already
very small, we move along quickly ("glide") in flat regions. This algorithm
seems intuitively appealing because flat regions should be "safe" regions
where the error doesn't usually change much over distance. Using a simple
2-layer, 3-neuron neural network that computes the XOR function as a test
bed, we find that for small to moderate learning rates this algorithm
converges significantly faster than conventional backpropagation with the
same learning rate outside of flat regions. For example, at eta = 0.5 the new
algorithm converges about 3 times as fast as the conventional one. However,
the new algorithm is riskier than the conventional one and tends to diverge
at higher learning rates. While the new algorithm already has some practical
value, it could be even more useful if the divergence problem can be solved.
Some ideas that may lead to its solution are given at the end of the paper.
(We now think that the "hairpins" described in the bulleted item above are
probably what causes this earlier algorithm not to converge.) This work
represents an early attempt to conquer the vast flat regions in an error
curve, turning the known properties of flat regions to our advantage. The
paper was accepted in a special session on neural networks at IECON '02 in
Sevilla, Spain. The paper, authored by Vitit Kantabutra and Elena Zheleva
(student), is entitled
"Gradient Descent with Fast Gliding over Flat Regions: A First Report."
For inquiries please contact Prof. Vitit Kantabutra at vkantabu@computer.org,
by phone at (208) 282-3405 (USA), or by regular postal service at:
Prof. Vitit Kantabutra
College of Engineering
Campus Box 8060
Idaho State University
Pocatello, Idaho 83209
U.S.A.
Prof. Kantabutra also has a personal interest in photography. You may view
some of his photos of Pocatello (also try this link), and also a photo of the
first tier of Erawan Falls, a famous 7-tiered waterfall in Kanchanaburi,
Thailand.
The Erawan Falls photo is a finalist in the EarthImage 2000 contest.
A charismatic Pocatello Moose 
Autumn from Hootowl Road near Pocatello 
Baby Red Squirrel, Buckskin Area,
Pocatello

All photos (c) Dr. Vitit Kantabutra, Pocatello.
Current Classes (Fall 2004)
EECS 374
CS 282
CS 385