In this section we walk through a grid-search-based hyperparameter optimization scheme that integrates seamlessly with Keras.
In production environments, where training rigs run either in the cloud or on-premises, consider a cloud-native platform such as Katib or another Kubernetes-compatible solution instead.
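Before reaching for a full tuning framework, it helps to see the mechanism itself: grid search enumerates the Cartesian product of candidate hyperparameter values and keeps the configuration with the best validation score. A minimal sketch in plain Python, where `evaluate` is a made-up placeholder standing in for a real train-and-validate step on a Keras model:

```python
from itertools import product

# Candidate values for each hyperparameter (illustrative choices).
search_space = {
    "learning_rate": [1e-2, 1e-3],
    "units": [32, 64, 128],
}

def evaluate(config):
    """Placeholder for training a model and returning a validation score.

    A real implementation would build a Keras model from `config`, fit it,
    and return e.g. validation accuracy. Here a toy analytic score (peaking
    at learning_rate=1e-3, units=64) keeps the sketch runnable.
    """
    return -abs(config["learning_rate"] - 1e-3) - abs(config["units"] - 64) / 1000

def grid_search(space, score_fn):
    """Try every combination in the Cartesian product of the space."""
    names = list(space)
    best_config, best_score = None, float("-inf")
    for values in product(*(space[n] for n in names)):
        config = dict(zip(names, values))
        score = score_fn(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, _ = grid_search(search_space, evaluate)
print(best)  # with the toy score above: {'learning_rate': 0.001, 'units': 64}
```

Keras Tuner wraps this same loop behind its tuner classes, adding trial logging, checkpointing, and alternative search strategies.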

Run the Tutorial

Open the official TensorFlow Keras Tuner tutorial in Google Colab to execute the code interactively.

Key references: (Bengio, 2012; Golovin et al., n.d.; Cheng et al., 2017; Or et al., 2025; Bottou et al., 2016)

References

  • Bengio, Y. (2012). Practical recommendations for gradient-based training of deep architectures.
  • Bottou, L., Curtis, F., & Nocedal, J. (2016). Optimization Methods for Large-Scale Machine Learning.
  • Cheng, H., Haque, Z., Hong, L., Ispir, M., Mewald, C., et al. (2017). TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks.
  • Golovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J., et al. (n.d.). Google Vizier: A Service for Black-Box Optimization.
  • Or, A., Jain, A., Vega-Myhre, D., Cai, J., Hernandez, C., et al. (2025). TorchAO: PyTorch-native training-to-serving model optimization.