
I ran across this documentation page of pytransform3d, which claims:

There are two different quaternion conventions: Hamilton's convention defines $ijk = -1$ and the JPL convention (from NASA's Jet Propulsion Laboratory, JPL) defines $ijk = 1$. We use Hamilton's convention.

Knowing that different definitions exist is nothing new (mostly the component ordering differs), but what is this $ijk = 1$ definition? It was the first time I had heard of it.

Then I kept digging into the reference it provided.

Only then did I find that the difference is not just about the ordering of the components, but about something more fundamental. So I put down this summary for my future reference.

$(q_0, q_1, q_2, q_3)$ or $(q_1, q_2, q_3, q_4)$?

The answer is that it doesn't matter much. This is not a mathematical or fundamental difference.

Equations can be easily converted, and code can be easily modified.
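For example, a trivial sketch of the ordering conversion in NumPy (the array values are just an arbitrary 90-degree rotation about x; I assume the scalar part is $q_0$ in the first convention and $q_4$ in the second):

import numpy as np

q_scalar_first = np.array([0.7071, 0.7071, 0.0, 0.0])  # (q0, q1, q2, q3), scalar part first
q_scalar_last = np.roll(q_scalar_first, -1)            # (q1, q2, q3, q4), scalar part last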

$ij = k$ or $ij = -k$

This is about math!

  1. Harold L. Hallock, Gary Welter, David G. Simpson, and Christopher Rouff, ACS without an Attitude, London: Springer, 2017.
  • (p.16) Alternatively, one could follow a different convention with quaternion multiplication. Many authors prefer a convention that, although not expressed as such, essentially redefines Hamilton's hyper-complex commutation relations (Eq. 1.5b above) into $ij = -k,\quad jk = -i,\quad ki = -j$
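A quick check of why these signs follow (my own one-liner, assuming $i^2 = j^2 = k^2 = -1$ still holds in both conventions): since $k \cdot (-k) = -k^2 = 1$, we have $k^{-1} = -k$, and therefore $ijk = 1 \Rightarrow ij = k^{-1} = -k$; similarly $jk = -i$ and $ki = -j$. Hamilton's $ijk = -1$ gives $ij = k$ instead.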

The quaternion representation is one of the best characterizations, and this chapter will focus on this representation. The presentation in this chapter follows the style of [99, 205, 219].

Which convention is used in which references?

Will keep updating as I read more references…

Using $ij = k$ and $(q_0, q_1, q_2, q_3)$

  1. Yaguang Yang, Spacecraft Modeling, Attitude Determination, and Control: Quaternion-based Approach, Boca Raton, FL: CRC Press, 2019. [Link].

Using $ij = k$ and $(q_1, q_2, q_3, q_4)$

  1. Harold L. Hallock, Gary Welter, David G. Simpson, and Christopher Rouff, ACS without an Attitude, London: Springer, 2017.

Using $ij = -k$ and $(q_1, q_2, q_3, q_4)$

I still haven't figured out why this amounts to redefining $ij = -k$.

  1. F. Landis Markley, and John L. Crassidis, Fundamentals of Spacecraft Attitude Determination and Control, New York, NY: Springer New York, 2014.

  2. Malcolm D. Shuster, “The nature of the quaternion”, The Journal of the Astronautical Sciences, vol. 56, Sep. 2008, pp. 359–373.

  3. Hanspeter Schaub, and John L. Junkins, Analytical Mechanics of Space Systems (Second Edition), Reston, VA: American Institute of Aeronautics and Astronautics, 2009.
    (p.107) It seems to implicitly adopt the convention whose order is consistent with rotation matrix composition, i.e. $ij = -k$.

Change the Content Root under Project Structure, so that I get the same pwd when running a script and when executing a selection in the console.

  • Otherwise, the two pwd values may differ.

Keras is already part of TensorFlow, so use from tensorflow.keras import ..., not from keras import ....

TensorFlow backend

EarlyStopping

from tensorflow.keras.callbacks import EarlyStopping

model.fit(..., callbacks=[EarlyStopping(monitor='val_loss', patience=5, verbose=1, mode='min', restore_best_weights=True)], ...)

Reproducibility of results (70% sure as of 2020/05/04)

TL;DR
Set all random seeds.
Use tensorflow.keras instead of standalone keras.
Use model.predict_on_batch(x).numpy() for prediction speed.

Putting this at the very beginning should work.

import os, random
import numpy as np
import tensorflow as tf
random.seed(42)  # Python random seed
np.random.seed(42)  # NumPy random seed
tf.random.set_seed(42)  # TensorFlow random seed
os.environ['TF_DETERMINISTIC_OPS'] = '1'  # ensure GPU reproducibility

Updating all code to tf.keras SEEMS to have solved the reproducibility problem.

BUT, the speed is 10x slower than using keras directly. After some digging, I found a workaround:

  • Use model.predict_on_batch(x) for sequential predictions.
    • Because model.predict() triggers the same computation path as model.fit(), including gradient computation or something I don't fully understand. See here for details.
    • Also, using model(x) for prediction seems to speed things up a lot.
    • Using model.compile(..., experimental_run_tf_function=False) also seems to speed things up a lot.
  • This causes another problem: the returned value should be an ndarray, but somehow I got a tf.Tensor. So I need to call model.predict_on_batch(x).numpy() to get the ndarray from the tf.Tensor explicitly.
    • I guess this is a bug that will be fixed in the future, because the docs say predict_on_batch() always returns a NumPy array.

predict() vs. predict_on_batch():

  • predict() is used during training
  • predict_on_batch() is used for pure prediction
  • They show a huge speed difference on small test data (a rough timing sketch follows). I guess I will never fully understand the underlying causes.
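A minimal timing sketch of the difference (the tiny Dense model and shapes below are my own placeholders, not from any source; absolute numbers will vary with TF version and hardware):

import time
import numpy as np
import tensorflow as tf

# hypothetical tiny model, only for illustrating the per-call overhead
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(32, 8).astype('float32')

t0 = time.time()
for _ in range(100):
    model.predict(x)           # full predict pipeline
t1 = time.time()
for _ in range(100):
    model.predict_on_batch(x)  # lightweight single-batch call
t2 = time.time()

print('predict:          %.3f s' % (t1 - t0))
print('predict_on_batch: %.3f s' % (t2 - t1))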

I use CNNs for time series prediction, not for image tasks.

  • How to Develop 1D Convolutional Neural Network Models for Human Activity Recognition
    • time series classification
    • two 1D CNN layers, followed by a dropout layer for regularization, then a pooling layer. Why this arrangement? (a sketch of this architecture follows the list)
      • It is common to define CNN layers in groups of two in order to give the model a good chance of learning features from the input data. Why is that?
      • CNNs learn very quickly, so the dropout layer is intended to help slow down the learning process
      • The pooling layer … consolidating them to only the most essential elements.
    • After the CNN and pooling, the learned features are flattened to one long vector
    • a standard configuration of 64 parallel feature maps and a kernel size of 3 (where does this "standard" configuration come from?)
    • a multi-headed model, where each head of the model reads the input time steps using a different sized kernel.
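A minimal sketch of that architecture in tensorflow.keras (the shapes n_timesteps=128, n_features=9, n_outputs=6 match the HAR dataset used in that tutorial; treat the rest as illustrative defaults):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, MaxPooling1D, Flatten, Dense

n_timesteps, n_features, n_outputs = 128, 9, 6

model = Sequential([
    Conv1D(64, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)),
    Conv1D(64, kernel_size=3, activation='relu'),  # CNN layers defined in a group of two
    Dropout(0.5),                                  # slow down the fast-learning CNN
    MaxPooling1D(pool_size=2),                     # keep only the most essential elements
    Flatten(),                                     # one long feature vector
    Dense(100, activation='relu'),
    Dense(n_outputs, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])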

Extensions

Stacked with RNN

an effective approach might be to combine CNNs and RNNs in this way: first we use convolution and pooling layers to reduce the dimensionality of the input. This would give us a rather compressed representation of the original input with higher-level features. (from here)
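A minimal sketch of that stacking idea (my own toy layout, not from the quoted source; the 128x1 input shape and layer sizes are arbitrary):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

model = Sequential([
    Conv1D(32, kernel_size=5, activation='relu', input_shape=(128, 1)),
    MaxPooling1D(pool_size=4),  # 4x shorter sequence of higher-level features
    LSTM(32),                   # the RNN consumes the compressed representation
    Dense(1),                   # e.g. one-step-ahead prediction
])
model.compile(optimizer='adam', loss='mse')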

My conclusion:

Simply DO NOT use Mendeley.
Zotero can do everything Mendeley can, even more elegantly.

Sadly the development of Docear stopped.

The problem of Mendeley is that it locks you in, in almost every aspect:

  • the database is encrypted, and you cannot fully port your data out to another application.
  • annotations are not embedded in the PDF files, meaning you have to rely on Mendeley to read, search, and edit them.
    • there are no batch operations to export annotations.

USNavalResearchLaboratory/TrackerComponentLibrary at GitHub

The Tracker Component Library is a collection of Matlab routines for simulating and tracking targets in various scenarios. Due to the complexity of the target tracking problem, a great many routines can find use in other areas including combinatorics, astronomy, and statistics.

Recently, I ran into a very comprehensive MATLAB repository, which is rare in my experience. Usually, comprehensive packages are found in other languages: Orekit in Java, GMAT in C++, many others in Python, and even one in Julia.
So I decided to have a look at it and take notes here.