Researchers and grad students have been quickly adopting TensorFlow 2.0 + Keras. Here's a fun collection of generative models: https://t.co/we6rsFi9mX
— François Chollet (@fchollet) May 14, 2019
A Keras usage pattern that allows for maximum flexibility when defining arbitrary losses and metrics (that don't match the usual signature) is the "endpoint layer" pattern. It works like this: https://t.co/dhYFKeemnC
— François Chollet (@fchollet) May 14, 2019
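The endpoint-layer pattern mentioned above can be sketched as follows. This is a minimal illustration assuming `tf.keras`; the layer name, data shapes, and variable names are my own, not from the tweet. The key idea is that the targets enter the model as an input, and the loss is registered inside a layer via `add_loss`, so it can have any signature it needs:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

class LogisticEndpoint(keras.layers.Layer):
    """Endpoint layer: computes the loss internally via add_loss,
    so the loss is free to use any inputs (targets, sample weights, ...)
    rather than the usual (y_true, y_pred) signature."""

    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)

    def call(self, logits, targets):
        # Register the loss with the model; no loss= needed in compile().
        self.add_loss(self.loss_fn(targets, logits))
        return tf.nn.sigmoid(logits)

# Targets are a model *input*, routed to the endpoint layer.
inputs = keras.Input(shape=(4,))
targets = keras.Input(shape=(1,))
logits = keras.layers.Dense(1)(inputs)
predictions = LogisticEndpoint()(logits, targets)

model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam")  # note: no external loss

x = np.random.random((16, 4)).astype("float32")
y = np.random.randint(0, 2, size=(16, 1)).astype("float32")
history = model.fit([x, y], epochs=1, verbose=0)
```

At inference time you can build a second `keras.Model` that shares the layers but omits the `targets` input.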
You can now easily apply weight pruning to your tf.keras models, to obtain smaller models for inference: https://t.co/xQOsSfR9PT
— François Chollet (@fchollet) May 14, 2019
We’re excited to share that weight pruning is now part of the TensorFlow Model Optimization Toolkit!
Learn how sparsity can dramatically reduce model sizes with negligible accuracy loss.
Read here → https://t.co/bhOm3s6ixP
— TensorFlow (@TensorFlow) May 14, 2019
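The idea behind magnitude-based weight pruning can be sketched in a few lines of NumPy. This is a conceptual illustration only, not the Model Optimization Toolkit's API; the function name and threshold logic here are my own:

```python
import numpy as np

def prune_low_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero (conceptual sketch)."""
    k = int(round(sparsity * weights.size))
    if k == 0:
        return weights.copy()
    # Find the k-th smallest absolute value and use it as the cutoff.
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.9, -0.05], [0.02, -0.7]])
p = prune_low_magnitude(w, 0.5)
# the two smallest-magnitude weights (-0.05 and 0.02) are zeroed
```

In the real toolkit, pruning is applied gradually during training (with a sparsity schedule) so the remaining weights can adapt, which is why accuracy loss stays negligible; the sketch above only shows the one-shot thresholding step.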
Introducing TensorFlow Graphics: Computer Graphics Meets Deep Learning by TensorFlow https://t.co/KbboZLw9kM
— Google AI (@GoogleAI) May 10, 2019
Simple Tensorflow implementation of "GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond" https://t.co/9az2SUq54X #deeplearning #machinelearning #ml #ai #neuralnetworks #datascience #tensorflow
— TensorFlow Best Practices (@TFBestPractices) April 29, 2019
Keep track of all the notebooks I make for Swift for TensorFlow in this repo. https://t.co/sissjGm8Xp
— Zaid زيد (@zaidalyafeai) April 26, 2019
Benchmarking Keras and PyTorch Pre-Trained Models. Very nicely done project. https://t.co/3wR1hz74gM
— Jeremy Howard (@jeremyphoward) April 11, 2019
We just added a new tutorial that shows how to write Transformer ("Attention Is All You Need") in #TensorFlow 2.0 from scratch.
Check it out here → https://t.co/6dmNGC9NkE
— TensorFlow (@TensorFlow) April 11, 2019
MLIR is a new compiler framework designed to optimize the execution time of #machinelearning models and make it easier to add support for new devices to TensorFlow.
Check out this article to learn more → https://t.co/9hFRoScSnU
and our GitHub repo → https://t.co/lL8kuWdZpd
— TensorFlow (@TensorFlow) April 8, 2019
Guide to Coding a Custom Convolutional Neural Network in TensorFlow Core https://t.co/cu4QrGVV1H #AI #DeepLearning #MachineLearning #DataScience
— Mike Tamir, PhD (@MikeTamir) March 31, 2019
Just finished rewriting the ConvNet chapter! 😅
Now includes building ResNet-34 in #TensorFlow 2 (see image), fine-tuning a pretrained model, object detection and image segmentation.
I pushed the notebook: https://t.co/YEkeSzGT1V
— Aurélien Geron (@aureliengeron) March 24, 2019