Reinforcement Learning of 3D Musculoskeletal Model for Walking or Running with Minimum Efforts

Authors

  •   Vikram Singh Chandel, Data Science Student, Manipal University, B-80 Aakriti Gardens, Nehru Nagar, Bhopal - 462 003
  •   Subhabaha Pal, Senior Faculty, Data Science and Machine Learning, Manipal ProLearn (Manipal Academy of Higher Education – South Bangalore Campus), 3rd Floor, Salarpuria Symphony, 7, Service Road, Pragathi Nagar, Electronics City Post, Bengaluru – 560 100

DOI:

https://doi.org/10.17010/ijcs/2020/v5/i2-3/152870

Keywords:

3D Musculoskeletal Model, Artificial Intelligence, Reinforcement Learning.

Manuscript Received

April 3, 2020; Revised April 25, 2020; Accepted May 2, 2020.

Abstract

In this work, we build a controller for a musculoskeletal model with the goal of matching a given time-varying velocity vector. The major objective is to build a fully comprehensive musculoskeletal model that reproduces realistic human movements driven by muscle contraction dynamics. A spectrum of human movements is then generated through variations in the anatomic model.
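A minimal sketch of the kind of reward such a controller could optimize is given below: it penalizes deviation from the commanded time-varying velocity and total muscle effort. This is only an illustration, not the authors' implementation; the function name velocity_matching_reward, the effort_weight coefficient, and the dimensions in the usage example are hypothetical.

import numpy as np

def velocity_matching_reward(actual_velocity, target_velocity, muscle_activations,
                             effort_weight=0.05):
    """Hypothetical shaped reward for a velocity-matching controller.

    Rewards staying close to the commanded time-varying velocity vector
    while penalizing total muscle effort (sum of squared activations).
    """
    actual_velocity = np.asarray(actual_velocity, dtype=float)
    target_velocity = np.asarray(target_velocity, dtype=float)
    muscle_activations = np.asarray(muscle_activations, dtype=float)

    velocity_error = np.sum((actual_velocity - target_velocity) ** 2)
    effort_penalty = effort_weight * np.sum(muscle_activations ** 2)
    return -velocity_error - effort_penalty

# Illustrative usage: a 2D pelvis velocity against a commanded velocity,
# with 22 muscle activations drawn at random for the example.
reward = velocity_matching_reward(
    actual_velocity=[1.2, 0.1],
    target_velocity=[1.5, 0.0],
    muscle_activations=np.random.uniform(0.0, 1.0, size=22),
)
print(f"step reward: {reward:.3f}")

An agent trained with such a reward trades off tracking accuracy against muscle effort, which reflects the paper's stated goal of walking or running with minimum effort.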

Published

2020-06-30

How to Cite

Chandel, V. S., & Pal, S. (2020). Reinforcement Learning of 3D Musculoskeletal Model for Walking or Running with Minimum Efforts. Indian Journal of Computer Science, 5(2&3), 36–46. https://doi.org/10.17010/ijcs/2020/v5/i2-3/152870
